Partnerships Power the Smart Grid of the Future

Our energy landscape is undergoing rapid transformation. The power grid is no longer just infrastructure; it’s a strategic enabler for sustainability across industries, allowing us to electrify processes traditionally reliant on fossil fuels. But this transition requires a modern, collaborative approach.

In this podcast, industry thought leaders emphasize the urgency of grid modernization and the importance of partnerships among operators, manufacturers, and service providers. They discuss the need for standardized technologies and open digital architectures to optimize investments for a smarter, future-ready grid.

Join us as we dive into the challenges and solutions, and learn how to optimize grid performance while balancing energy demands.

Our Guests: Advantech, Capgemini, CCS Insight, Enedis, and Schneider Electric

Our guests this episode are:

  • Paul O’Shaughnessy, Sales Director for Northern Europe and Sector Head for Energy and Utilities in Europe, Advantech
  • Philippe Vié, Energy Transition and Utilities Advisor, Capgemini
  • Ian Fogg, Research Director, CCS Insight
  • Marc Delandre, Director for Advanced Network Technologies, Enedis
  • Valerie Layan, Vice President, Power and Grid Segment Europe, Schneider Electric

Podcast Topics

Paul, Philippe, Valerie, Marc, and Ian answer our questions about:

  • 06:37 – Current state of grid modernization efforts
  • 12:13 – What drives the grid of the future
  • 20:25 – Demand pressures and technology limitations
  • 27:08 – Renewable energy challenges and considerations
  • 30:28 – Digital technologies shaping the smart grid
  • 37:39 – Bringing edge AI into the energy space
  • 41:20 – The role of the substation in grid modernization
  • 45:23 – Future-proofing ongoing smart-grid efforts
  • 49:38 – Working with partners like Intel and the E4S Alliance
  • 55:20 – Customer examples and use cases

Related Content

To learn more about grid modernization, read “The Grid of the Future,” available on insight.tech.

Transcript

Christina Cardoza: Hello, and welcome to “insight.tech Talk,” where we explore the latest technology trends and innovations. As always, I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to be talking about the smart grid of the future. And we have a panel of expert guests from Advantech, Capgemini, CCS Insight, Enedis, and Schneider Electric.

But first I would love to get to know more about our guests and what they’re doing in this space. Paul from Advantech, I’ll start with you. What can you tell us about the company and what you guys are doing in the energy space?

Paul O’Shaughnessy: So, firstly my name is Paul O’Shaughnessy. I’m the Sales Director for Northern Europe and the Sector Head for Energy and Utilities in Europe for Advantech. A little bit about Advantech first: We’re a Taiwanese-headquartered company established in 1983 and a market-share leader in industrial PCs (IPCs). We service a lot of industries—energy and utilities is one of those sectors, and it’s primarily my focus. And we have three manufacturing plants globally, in Japan, China, and Taiwan, and we’re looking to expand on that.

What we do in the energy sector today is—and what we would be known for is—basically a combination of fan and fanless systems that are used in primary and secondary substations for centralized protection and control. And we would also be kind of a domain leader in that hardware in terms of the development of that hardware for the future, for the next trends in terms of digitalizing the grid.

The other areas that we’ve been involved in have been in the connectivity, secure connectivity, to remote assets on the grid. And that’s something I’ll talk to a little later as we go through the podcast. The big change for us right now is that we’re realigning the business to become vertically focused rather than regionally focused. So, very significant focus from headquarters and a drive to focus on energy and utilities.

Christina Cardoza: Great. Looking forward to digging in a little bit more about what that means for the future and for the grid. But next I will start with Philippe from Capgemini. Tell us a little bit more about yourself and what Capgemini is doing in this space.

Philippe Vié: Hello, everyone. So, Philippe Vié, Energy Transition and Utilities Advisor at Capgemini. I am the former Head of the Energy Transition and Utilities sector within the company. I am participating in many projects about grid modernization, which is definitely one of the key offers of the company. The company covers consulting, engineering, and application services, plus insights and data, intelligent industry, and many other offers, across more than 50 countries in the world. And we see significant growth in grid modernization these days.

Christina Cardoza: Thank you, Philippe. Ian, I’ll throw it to you next. CCS Insight just published a paper on this very topic, with contributions from various different guests on the podcast today: “The Grid of the Future.” It’s available on insight.tech, so we’ll dig into some of that. But before we get into that, what can you tell us about what you’re doing at CCS Insight and if there’s anything you can tease up about that report we have.

Ian Fogg: So I’m Ian Fogg. I’m a Research Director at CCS Insight. We’re an industry-analyst company that looks at technology transformation across a number of different areas. For this piece, we looked a lot at what’s happening with the grid and talked to many of the people on this podcast and many other companies as well. We also looked at what the parallels are with other sectors, with digitization in other industries, and what the implications are for the grid of the future from those wider perspectives.

Christina Cardoza: Awesome. I love that. So far we’ve had—we have an analyst, we have hardware, we have software—we have a lot of people from different spaces in this area. So I think it’s going to be a great conversation. And then also joining us we have Marc from Enedis. So, what can you tell us about what Enedis does and where you see the future of the grid going?

Marc Delandre: Hello, I am Marc Delandre from Enedis. I am Director for Advanced Network Technologies within Enedis. Enedis is the main French DSO and probably one of the largest in the world. Enedis is a full subsidiary of EDF. We operate the medium- and low-voltage network in France. That means over 1.4 million kilometers of network, 800,000 secondary substations, and almost 40 million customers. And we have to deal on a daily basis with more and more renewables and charging points connected to the grid. It’s a big challenge for us for the coming months and years.

Christina Cardoza: Yeah, I expect renewables to be a big part of this conversation that we’ll get into. But before we get there, last but not least—and I did the introductions by company alphabetical order, so that’s the only reason why we kept Valerie last. But we saved the best for last. Valerie from Schneider Electric, what can you tell us about what you do in this space and Schneider?

Valerie Layan: Yeah, sure. Hi, everyone. So very happy to be here today with all of you. I’m Valerie Layan. I’m the Vice President in charge of what we call Power and Grid Segment in Europe. So this is all the chain of energy from the power generation, transmission, distribution, down to what we call the “prosumer.”

Schneider Electric is really the leader and the specialist in energy management and industrial automation. So we do everything we can to help our customers make their infrastructure more resilient, more efficient, and more sustainable. And we accompany them with what we call EcoStruxure, which is an IoT architecture, from the connected product up to the software-analytics layers, to really make their network smarter.

Christina Cardoza: That’s great. And that’s sort of where I want to start the conversation off today, with you, Valerie. You just mentioned you’re helping customers be more resilient, efficient, sustainable. So, since you’re working with all of these different customers, and I imagine they’re in various states of their transformations or ability to innovate, what would you say is the current state of the grid and our efforts to make it smarter?

Valerie Layan: I would say that the grid is not smart enough today and requires much more digitization to make it more efficient, as I mentioned, but also flexible and decarbonized. For that we have standards like IEC 61850-2, which is contributing to making the substation more standardized in terms of what we can do. But we will need 61850-3 to make it even more efficient with the start of virtualization.

The root cause of all this need—and I want to take a step back here: why do we need the grid to be smarter or more digital?—is really the pressure at the EU level to have a better mix of renewables. We have to grow from 23% renewables in the mix in 2022 to 42.5%—actually the ambition is even 45%—by 2030. That puts a lot of pressure on the grid to integrate these renewables, and it’s happening at the edge.

And we can no longer imagine investing in a lot of hardware and CapEx to absorb that capacity, both on the generation side and on the demand side—and I haven’t even spoken about the demand for more electrification, EVs, etcetera; we all know that pressure on the demand side. So we need to make the grid more digitized and smarter, so that we don’t count only on CapEx—which usually takes five years as a project—but make it more efficient by adding software to make it smarter.

So, for example, if we deploy an ADMS solution, we know we can reduce technical losses by at least one point, which basically means the grid is more efficient and more resilient. And we know, for example, from the case with Enel in Italy, that deploying it saved up to €10 million per year in investment.

So this digitization is key, and the journey is long—we are not there yet. So we are really pushing the whole ecosystem to make sure that we actually invest in software. And just one last point of reference: usually, when we invest €1 in renewables, we should invest the same in the grid. In reality we invest less—€1 in renewables for only about 75¢ in the grid. So we see that there is a requirement to invest more in the grid.

Christina Cardoza: I always love when we talk about these transformations being a journey, because it is a journey and it’s not a cookie cutter approach either. Companies and everybody may be in different parts of the journey, and they may need different things as part of their journey. So we’re going to dive into a lot of these different journeys and how we can successfully do that.

But I’m curious before we get there, Ian: with the smart grid report that CCS Insight did, are you seeing similar things about where we are with the grid and its transformation as Valerie and Schneider are?

Ian Fogg: Well, obviously on the generation side there’s been a massive shift to renewables, and we have a comparison of 2010 and 2023 in the report with some figures across 48 markets. The interesting piece there is when you look at where the growth has come from within renewables: if you look at hydro, wind, and solar, it’s come from solar and wind, which have very different patterns of generation. Which I think leads to what Valerie is saying about why you need to also invest in the grid alongside the renewable side.

I think the other piece that’s interesting—which one of the speakers just touched on—is that it’s not just the shift within the electricity-generation industry to renewables. As the wider economy decarbonizes, that shifts a greater proportion of the overall energy needs of each country to electricity as well. And that has another dynamic.

And then on the consumption side or the distribution side you have not just new consumption with things like EVs, you’ve also got generation happening with solar panels, which can make that potentially a two-way dynamic, which is quite different to what that distribution grid used to do. And then you think about, well, how do you balance the consumption with the generation?

And you can obviously look at storage solutions, but another way is having greater intelligence in the grid to encourage people to shift their consumption patterns across the hours of the day. And there you need very, very good, very speedy data communication between the different parts of the grid, right the way from the users, right the way through the distribution, to help you balance that need. And that requires new investment in technology, in substations, in billing systems, in all kinds of parts of the grid.
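To make that two-way dynamic concrete, here is a minimal sketch with invented numbers: compute a feeder’s net load as consumption minus local solar generation, and midday can go negative—power flowing back toward the substation, something the traditional one-way distribution grid was never designed for. The load and PV profiles below are hypothetical.

```python
# Illustrative sketch (synthetic data): net load on a distribution feeder
# with rooftop solar. A negative net load means power flows back toward
# the substation -- the two-way dynamic described above.

hours = list(range(24))
consumption_kw = [300 if 7 <= h <= 22 else 180 for h in hours]               # hypothetical demand
solar_kw = [0]*7 + [50, 150, 280, 380, 420, 430, 400, 320, 200, 80] + [0]*7  # hypothetical PV output

for h, load, pv in zip(hours, consumption_kw, solar_kw):
    net = load - pv
    flag = "REVERSE FLOW" if net < 0 else ""
    print(f"{h:02d}:00  net load {net:6.0f} kW  {flag}")
```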

Christina Cardoza: Yeah, and these changes can be so complicated and confusing sometimes. To be successful, they have to be able to scale broadly across different countries—but like you mentioned, different countries need different things. And almost everybody on this podcast is in a different place throughout the world.

So I want to look at some of the factors that are pointing to the smart grid that are driving this effort. Philippe from Capgemini, can you tell us a little bit about what you’re seeing and what the demand or pressure is from Capgemini?

Philippe Vié: Absolutely, Christina. And I will build on Valerie and Ian’s points, definitely. First of all, renewables are intermittent, which means that some hours of the day you have too much generation compared to the consumption, and some hours of the day you have not enough generation compared to the consumption. So we need to definitely balance.

And there is a lot of pressure on the electric system and on the electric grids for this production-equals-consumption balance—Kirchhoff’s law, definitely. This overcapacity in some periods of the day creates markets with many negative-price episodes, which endanger the energy transition because they endanger the profitability of generation players, among them the renewables players.

Secondly, there is a paradigm shift, because in the past the energy sources were centralized and the electricity was flowing through transmission and distribution grids from the centralized generation assets. Today the renewables are distributed in the grid, and it makes a paradigm shift from one-way to two-way electricity flow.

Then we have the massive electrification. We are today at 23% electrification of overall energy needs, and under the Net Zero Scenario from the International Energy Agency we will probably move to about 50% or 60%, depending on the region of the world. That means EV charging; electric heating for industry, for buildings, and for residential customers; hydrogen production; and data centers, which account today for 2% of electricity and will account for 4%. It’s striking: today the consumption of data centers equals the production of France—a significant country. We also have storage, which is coming.

Then the other drivers are about digital technology, which has progressed a lot—the convergence between IT and OT, AI, and Gen AI tomorrow. Automation can be leveraged to avoid physical electrical investments in networks. As Valerie stated, any time you put $1 into renewables, you need to put $1 into grids. And it means that we will move from $400 billion in grid investment today to $700 billion by the end of the decade, by 2030. It’s a huge increase in investment, and it will increase the price of electricity, of course.
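To illustrate the balance point Philippe makes—production must equal consumption at every instant, and overcapacity shows up as negative wholesale prices—here is a toy sketch with invented hourly prices; a real system would pull these from a market feed.

```python
# Toy sketch (invented prices): spotting the negative-price episodes that
# signal generation exceeding consumption, as described above.

prices_eur_mwh = [42, 38, 35, 30, 12, -5, -18, -9, 3, 25, 40, 55,
                  60, 48, 31, 8, -2, 14, 36, 58, 72, 65, 50, 44]

episodes = [(hour, price) for hour, price in enumerate(prices_eur_mwh) if price < 0]
print(f"{len(episodes)} negative-price hours:", episodes)

# Flexible assets (storage, EV fleets, electrolyzers) would be scheduled
# to consume in exactly these hours, helping restore the balance.
```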

Christina Cardoza: Yeah, a lot going on driving these efforts, and a lot that we still need to address and begin to even make dents in. Marc, from Enedis’s standpoint—because you guys are coming in and you’re part of this conversation and the innovations of the grid from a different perspective. So are you feeling those same drivers and factors? And what pressures do you feel from your end?

Marc Delandre: I fully agree with what has been said by Valerie, Ian, Paul, and Philippe. Electricity is magic. You can do almost everything with electricity: heating, air conditioning, lighting, cooking. You can use electricity for cars, for trains, and so on. And electricity can be generated by big units—nuclear plants—or by solar panels, wind farms, and so on.

But there is a big issue with electricity. It’s not impossible but very difficult to store electricity. You know, we have the experience of electric cars, and the main issue with electric cars is the range of the cars, the sizing of the batteries. So the challenge we have to deal with is to balance in real time energy consumption and generation. And it has to be done at the level of the primary substation, at the level of the secondary substation, and by any customer connected to the grid.

And to do that we need tools—tools developed with our partners. And the main issue we have is that an electric grid is not an entity inside a building; it covers a big country. We have many, many units in the field. When we invest, we invest a huge amount of money, so it has to be affordable for the customer, because in the end the customer will pay for all the investment on the bill.

But we need solutions we can operate for years, for a long time. So we will discuss standardization and open standards, and also cybersecurity and so on, to manage all this over the long term.

Christina Cardoza: It’s interesting, because when I hear the term “smart grid” or think of the grid being smart, I think of edge AI and automation and all of these different digital tools and technologies we’ve been talking about. But we also have to be smart in the ways that we approach the grid, which has become very clear just from the beginning of this conversation. And of course edge AI and all this technology is part of it, but it’s not the reason to be becoming smart and to be doing this.

So I want to lay out what the main pillars are for this grid of the future to really be successful, because we talked about resiliency, efficiency, sustainability, flexibility—having a right balance. So, Ian, is there anything that you can share with us about what this all means as we move towards the future of the grid and what those real pillars we should be looking at are?

Ian Fogg: Well, I think, to highlight some of the challenges, I mean, one of the other challenges when you look at, say, EV adoption is often it’s not spread evenly. So you get areas of the grid that have greater pressure from some of these changes than others.

I think the bigger challenge, though, when you think about the grid of the future and you think about making it smarter, is what we need to do is to increase the flexibility of the system so it can respond to these different consumption and generation patterns, but we must also maintain the reliability of the grid at the same time. It’s not acceptable for the grid to become as unreliable as a cellular network; it has to maintain the reliability while also adding increased flexibility. And that has very specific challenges when you start looking at the technology that needs to be deployed to improve that responsiveness, to improve that flexibility, but still keep up the reliability.

Christina Cardoza: That’s great. And, Paul, in the beginning of your introduction you mentioned how Advantech, you guys focus on hardware, and so I’m sure there’s a lot of pressures coming in just looking at that and making sure that you have all the hardware and technology in place. So, as Ian’s talking about some of the challenges, I want to hear from your perspective: What are the limitations that you face from a hardware perspective in making this possible?

Paul O’Shaughnessy: Yeah, that’s a good point. Advantech has a massive portfolio of product; the issue is, have we got the right product? And that’s the real question when you start to talk about some of the challenges and some of the considerations we need to take into account here.

As a hardware manufacturer, for me one of the major challenges, when I look at this and listen to Philippe, Marc, Valerie, and Ian talk about what’s going on, is the scale of it. The scale of the challenge, from a hardware perspective, is a great thing, of course, but it’s also a huge challenge for all of us to try and cope with.

The other aspect that we need to look at as a hardware manufacturer—and indeed the end users and the SIs and all the other ecosystem players deploying the technology—is the variability of the assets that are deployed. It’s not like it’s a standard asset; there is huge variability. And that variability requires multiple solutions. If you think about the E4S Alliance, for instance, where I think all of us are members, we have 13 use cases within the E4S working groups alone that we have to try and make sure we can service with an open platform.

So getting that information about the volume and the variability and the definition—the hardware definition is the big challenge for us, clearly defining what that is. And that’s driven by the use case and the type of assets in the application. We’re a hardware manufacturer, but we have to consider that we need to be able to support various software stacks; we have to be able to support legacy protocols and all of the new protocols that are required. Is it a real-time deterministic system? Virtualization? Security? These are all key topics. Security has a hardware element with TPM—is that something that’s going to be required?

And then working with partners, strategic technology partners like Intel, who is a significant partner for us, on defining the processor roadmaps that we need to be focused on—to leverage that to ensure we’re bringing the right products to market. Storage technology, IO, cooling—all of these are key things that are driven by the variability of the assets in the field and the scale of them.

But also there’s one other big thing, and Valerie alluded to it earlier, which is compliance with things like IEC 61850-3 and IEEE 1613. As a hardware manufacturer these are things we have to comply with to be a player in this space. And that is not an inexpensive topic. It’s a real challenge, particularly if the definition of the product is not really fixed; it can become a very expensive thing to deal with.

Christina Cardoza: Yeah, it is quite intimidating, this journey. You’re talking about the technological challenges, the cultural challenges. And then you have all the pressures from needing more electricity, electric vehicles, different regulations. And it’s one of those transformations that we can’t ignore. Everybody has to be moving toward this; we can’t continue to do things the way we’ve been doing them.

So it’s great to hear that you guys have all teamed up in the E4S Alliance, making sure that we approach this in the right way and standardize how we’re going to move, so that we can start addressing these challenges and limitations together through partnership. And I’m sure there are more challenges that we haven’t even crossed yet.

Philippe Vié: The challenges are many. Money, because you have to increase your investment in the digital grid and in the grid itself. And the investment approval has to be made by stakeholders—shareholders, but also governments and regulators—to probably double by 2030 the investments that were made in 2020.

The second challenge is skill scarcity. The smart grid will probably create five million jobs in the next 20 years, with digital technology at the core. There is also the move from electrotechnics to digital, and the Baby Boomer retirement wave, with many people to replace. Then there is the lack of roadmaps: many utilities are launching one program, then a second program, but you need a consistent roadmap and to revise it every two or three years, because things are moving very fast.

Then there is permitting when you are building new lines—there is a digital dimension to permitting, and it can take one to six years in the main geographies to build a new line. Digital engineering is very useful in that direction. And of course people don’t want electric lines in their backyard—the same goes for windmills and other renewables.

And finally there is also the lack of standards. Valerie has mentioned the substation standards, and we need—and this is the purpose of E4S and vPAC, depending on the granularity of the substation—for all of us, technology providers and grid operators, to agree on common standards to develop interoperable objects and interoperable modules on smart data. Each limitation comes with many solutions that can vary from one country to another, from one electric grid’s state to another. So: a big program, a need for a roadmap, and many, many challenges to overcome.

Christina Cardoza: One thing I want to touch on that all of you have spoken about is the idea of renewable energy sources providing clean energy to the grid in an effort to help it become more sustainable, resilient, and that overall efficiency that we keep talking about. But of course, renewable energy sources, they also come with their own considerations and challenges when approaching the grid.

So, Ian, I know this was touched upon in that “Grid of the Future” report from CCS Insight. What are the considerations, and how should we be thinking about renewable energy sources when we are moving towards a smarter grid?

Ian Fogg: I think, as I said, storing energy is very difficult. So if you can alter the consumption patterns to better respond and reflect the more variable generation patterns that you get with solar or wind, you don’t need to have as much generation capacity. And that’s why that substation piece is so important, because you put intelligence there, you can help match things up.

One of the examples that we’ve seen is that the nature of electricity tariffs is changing. In some consumer spaces we’re seeing half-hourly price points and sometimes even more. And that requires very precise timing, very good technology at the end user, but also in the substation and right the way through. And what’s driving a lot of that is this shift to renewables. I think we saw, between 2010 and 2023, across 48 countries, the proportion of electricity generation that was wind rise from 2.6% to 10.8%.

Solar saw a similar rise, from under 1% up to 6.6%. So very significant increases in those. Obviously solar is seasonal in terms of time of year; the further you are from the equator, the bigger the winter-summer differences. Wind obviously varies based on the weather. We’ve got to have that greater flexibility, and that requires greater communication within the grid-technology systems to marry these things up.
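As a hypothetical illustration of what those half-hourly price points enable: a flexible load such as an EV charger can simply pick the cheapest half-hour slots of the day. The prices, charger rating, and energy need below are all invented.

```python
# Hedged sketch: shifting a flexible load onto the cheapest half-hourly
# tariff slots. All numbers are invented for illustration.
import math
import random

random.seed(1)
slot_prices = [round(random.uniform(5, 40), 1) for _ in range(48)]  # pence/kWh for 48 half-hours

charger_kw = 7.0
energy_needed_kwh = 6.0
kwh_per_slot = charger_kw * 0.5                      # energy delivered in one half-hour slot
slots_needed = math.ceil(energy_needed_kwh / kwh_per_slot)

# Pick the cheapest slots, regardless of where they fall in the day.
cheapest = sorted(range(48), key=lambda s: slot_prices[s])[:slots_needed]
cost = sum(slot_prices[s] * kwh_per_slot for s in cheapest)
print("charge during half-hour slots", sorted(cheapest), f"for about {cost:.1f}p")
```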

Christina Cardoza: I’m just curious, since Enedis is managing electricity-distribution networks and we talked about renewable energy: how do you approach renewable energy? Or what role is renewable energy taking in some of your efforts?

Marc Delandre: We have to deal with renewables. And tomorrow every customer will become a prosumer. It means you will have renewable energy on every single dwelling house, on top of any building, everywhere. And the key role of the network will be to balance energy generation and consumption. It will be our core activity tomorrow. So it’s strategic, because without electricity you cannot do anything. It’s strategic because it’s very important for everyone, and it needs strong, strong investment.

Christina Cardoza: Yeah, and the great thing is that there have been so many advancements in technology to help get us closer to our goals. It’s funny, we’ve been talking about moving towards a smart grid for years, probably decades now. And what has changed during this conversation is technology has advanced to help us reach some of these goals.

So I want to look at some of those recent technological advancements and how they can help. And then we’ll get into, later down in the conversation, how you can successfully adopt those new technologies. So, Valerie, I’ll start with you. From Schneider’s perspective, what can you tell us about new technology in this space that is helping us go towards a smart grid? 

Valerie Layan: So, “new” is maybe not new, but new for the grid, I would say, because there is a maturity in other segments—like telco, health, transportation—that have been using some of this technology before the grid. And this is good, because we will leverage mature technology to make our grid smarter.

So, first of all there is an evolution of ADMS solutions to really deliver what we say we want: a grid that is more efficient and more resilient. We have capacity management, outage management, load and generation control, asset management, and power quality, which are really building the reliability and efficiency of this network.

There is also the prosumer—we mentioned renewables and the prosumer a lot. For me there are two things: there is generation, but there is also the prosumer. The prosumer is very interesting in the scheme of flexibility at the edge, because these are industrial sites or commercial buildings, even potentially consumers, that have their own generation. Typically a solar rooftop is an example, but a bigger consumer-industrial area—a port, for example—could actually have wind as a generation source.

And then they are going to potentially reinject, resell that capacity to the grid depending on the price at a certain time versus storage, etcetera. So we have now a solution to integrate this flexibility at the edge. And these are things we see in the evolution to make the grid more efficient and also to monetize that ecosystem.

The second technology, which is not new but which we are also bringing into the grid, is the digital twin, which can be used for training, for simulation, for remote software updates—typically also for managing IED firmware as a fleet, to upgrade the devices all together at a certain point in time. And we mentioned compliance: we have IEC 61850-2, which is helping in terms of security, cybersecurity, and openness. It’s the first level of standardization that can help at the substation level.

And the third point is that, with all this data at the edge putting pressure on the substation, virtualization is key. This is why we are all in E4S: because we believe that virtualization will absorb the level of data and the pressure that is coming at the edge onto the secondary substation. We have the experience from other markets—telecom, to name at least one. And 61850-3 is under definition. In E4S we really want to collaborate to define an open-standard reference architecture and a common design for this virtualization.

So I think we have a mix of mature technology coming from other end markets, plus our own technology—ADMS, for example—that is evolving to take into account flexibility and virtualization.
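To sketch the fleet-level firmware management Valerie describes, here is a minimal, purely hypothetical example: digital-twin records for a handful of IEDs drive a staged rollout, with unhealthy or out-of-date devices held back. The device IDs, version numbers, and rollout policy are all invented.

```python
# Purely illustrative sketch: using digital-twin records of an IED fleet
# to stage a firmware upgrade. All device data and policy are hypothetical.
from dataclasses import dataclass

@dataclass
class IedTwin:
    device_id: str
    substation: str
    firmware: str
    healthy: bool          # e.g., derived from self-test telemetry in the twin

fleet = [
    IedTwin("ied-001", "sub-north", "2.1.0", True),
    IedTwin("ied-002", "sub-north", "2.1.0", True),
    IedTwin("ied-003", "sub-south", "1.9.4", False),
    IedTwin("ied-004", "sub-east",  "2.1.0", True),
]
TARGET = "2.2.0"

# Wave 1: only healthy devices already on the expected baseline version;
# everything else is held back for manual review first.
wave1 = [t for t in fleet if t.healthy and t.firmware == "2.1.0"]
held_back = [t for t in fleet if t not in wave1]

for twin in wave1:
    print(f"schedule {twin.device_id} @ {twin.substation}: {twin.firmware} -> {TARGET}")
print("held back for review:", [t.device_id for t in held_back])
```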

Christina Cardoza: In some of the research in “The Grid of the Future,” you talked about what businesses can learn from transformations happening in other industries. So is there anything from that report that you can share with us?

Ian Fogg: I think virtualization is one of the key ones, where in many industries we’ve seen them move ahead with virtualization, shift more functions from hardware into software, use standardized hardware solutions which give you scalability to upgrade the platform to support higher performance workloads. We’ve seen that in many, many different industries.

I think the other piece that we’ve seen in other sectors is this combination of operation technology and IT, and how OT and IT interact. We’ve seen that, and there’s some examples in the report around that too, around what that interaction is and what the different cultures are around that. And that’s something where there are other industries which are going through exactly that same kind of transformation.

Christina Cardoza: It’s good to hear that we have some of this technology from different industries or technology becoming more mature that is going to help us make these changes. I think digital twinning is very powerful—being able to make that digital representation and see how changes are made before they’re actually implemented. And so, from your perspective, what other advancements or what other technology do you see being used in these efforts? 

Philippe Vié: So I will take different angles, because I agree with everything Valerie has mentioned. Definitely, I will take the angle of automation. For example, we are dreaming of a control room without people, only for critical conditions, but fully automated. We are dreaming of AI enabling self-healing when there is an outage to reconnect the consumers, 99% of the consumers, in one or two minutes. We are dreaming of predictive maintenance, as Paul stated earlier.

We are dreaming of all the AI capabilities. And many utilities are deploying AI at scale with 20 to 30 use cases, which are really beneficial for grid performance and for avoiding investment—for example, dynamic line rating, to carry more electricity than the nominal capacity of the line.

We are dreaming of asset-investment life-cycle planning. It starts with AI-enabled grid planning; then you can follow the full cycle from construction up to the deconstruction of the line, the nodes of the network, the transformers. So many technologies can be enabled today to bring value to the grid operators and finally to the consumers.
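Dynamic line rating, which Philippe mentions, boils down to a heat balance: cooler, windier weather lets a conductor safely carry more current than its conservative static nameplate rating. The sketch below is drastically simplified—real implementations follow standards such as IEEE 738—and every number in it is invented, so the uplift shown is exaggerated.

```python
# Drastically simplified dynamic-line-rating sketch -- NOT IEEE 738.
# Steady-state heat balance: I^2 * R = k(wind) * (T_max - T_ambient),
# so cooler, windier weather raises the safe current limit.
import math

R_OHM_PER_M = 7e-5   # hypothetical AC resistance per metre of conductor
T_MAX_C = 75.0       # hypothetical maximum conductor temperature

def k_cooling(wind_ms):
    """Hypothetical convective cooling coefficient (W per m per degC)."""
    return 2.0 + 1.0 * wind_ms

def ampacity(t_ambient_c, wind_ms):
    return math.sqrt(k_cooling(wind_ms) * (T_MAX_C - t_ambient_c) / R_OHM_PER_M)

static_rating = ampacity(40.0, 0.5)    # conservative hot, still-air assumption
dynamic_rating = ampacity(5.0, 6.0)    # a cool, windy night
print(f"static-style rating: {static_rating:.0f} A")
print(f"cool windy night   : {dynamic_rating:.0f} A "
      f"(+{100 * (dynamic_rating / static_rating - 1):.0f}%)")
```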

Christina Cardoza: Great. And the automation, the data, like you all mentioned, I think it’s also going to be extremely important getting real-time analytics and all of this information so that we can make changes on the fly or we can really get these deep insights.

All of you were mentioning the edge and AI. So, Paul, I know this is something that you also brought up in the beginning. What is the role of edge AI going to be? Because this has been a big conversation in all different industries. So how is edge AI coming into the energy and utility space?

Paul O’Shaughnessy: If I talk about it from our business perspective, this is the fastest-growing part of the business, bar none. It is absolutely exploding, and the number of use cases and applications is growing day by day. Up until now it has been heavily dominated by vision-based applications; we have seen so much of that. But to something Philippe mentioned about real-time monitoring and control: that is where edge AI has a real opportunity to have massive impact on the grid, in terms of real-time decision-making at the node, at the substation, allowing for immediate responses to changing grid conditions, improving stability, and reducing downtime.

The other one is predictive maintenance. We see this already heavily deployed in manufacturing environments, where you analyze the data from existing sensor networks and from overlay sensor networks—deployed where you have that IT/OT conflict, where rather than trying to fix something that isn’t broken you overlay a new network and put that in place. We’ve seen a lot of that happening, and it has a huge impact in terms of using edge AI to optimize the efficiency of manufacturing environments. I see no reason why the same thing can’t apply in the grid.

The other one, and the topic that comes up probably as the most talked about topic within the IoT sector in general, is security. I think AI has a real opportunity here, and AI on the edge has a real opportunity in terms of enhancing the security already in place, both from a cybersecurity perspective but also from a physical security and from a health and safety perspective.

I said at the start that we’ve seen an explosion in the area of vision, and what we see is a lot of vision applications for edge AI where it is protecting workers in dangerous environments and ensuring that the people who are getting access to certain environments are the people qualified to access them. And I think that is something that has an absolute play for the grid.

And then, as I said with the cybersecurity part, being able to monitor behavior and identify unusual behavior at the node is a critical aspect of cybersecurity, and edge AI can enhance that. Those would be the three main areas where, as a manufacturer, we see real opportunity for deployment into the grid.
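As one hedged example of what “identifying unusual behavior at the node” can look like, here is a minimal rolling z-score detector over a synthetic sensor stream. Production edge-AI systems would use far richer models; the thresholds and data here are invented.

```python
# Minimal sketch: flagging anomalous readings at the edge with a rolling
# z-score over a synthetic sensor stream. Thresholds and data are invented.
from collections import deque
import statistics

window = deque(maxlen=60)  # rolling baseline of recent readings

def check(reading, threshold=4.0):
    """Return True if the reading is anomalous vs. the rolling baseline."""
    if len(window) >= 30:  # wait for a minimal baseline
        mu = statistics.fmean(window)
        sigma = statistics.pstdev(window) or 1e-9
        if abs(reading - mu) / sigma > threshold:
            return True    # anomaly: deliberately not added to the baseline
    window.append(reading)
    return False

stream = [50.0 + 0.1 * (i % 7) for i in range(100)] + [50.3, 98.7, 50.2]
for i, r in enumerate(stream):
    if check(r):
        print(f"sample {i}: anomaly ({r})")
```

The point of running this at the node is latency: the device can flag, or act on, the anomaly immediately rather than waiting for a round trip to the control room.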

Christina Cardoza: You bring up some great points. As we add all of these digital technologies and advancements to the grid, we have to come back to the themes of the conversation—the reliability, the sustainability, the efficiency—and make sure that it’s secure, so that it can do all of these different things.

One thing you mentioned—and you all mentioned—is that AI is going to be really good at the substation; a lot of these changes are going to happen at the substation. And it’s funny: when we first had this conversation years ago about this market, the substation was the best place, or the first place, to start making these changes. And we’re still having conversations around changes at the substation. So I wanted to understand what is happening at the substation. Is this still the best place to make these changes or to start some of these efforts? Or have we moved beyond the substation and are we focusing on different areas?

Valerie Layan: I would say the substation is still alive. It’s true that some of the other things we mentioned before—asset-performance management for predictive maintenance, digital twin, ADMS, DERMS—sit at the enterprise level, like any control center, SCADA, etcetera. However, as I also mentioned, the need to integrate renewable farms and prosumers is happening at the edge, and the flexibility needs to happen at the edge.

So for us the secondary substation is alive but has to adapt to the new challenge, and basically virtualization is a part of the answer. So, yes, there is a future for the secondary substation, really due to the pressure that will happen at the edge, both in terms of connection and data.

Marc Delandre: I fully agree. We need real-time monitoring and control; we need edge computing, virtualization, and so on. Cybersecurity is a key point. I would add that we also need an evolving solution. We don’t know today what the priority use cases at the secondary substation will be in 5 or 10 years. So we need evolving solutions based on open standards.

Why open standards? Because big DSOs such as Enedis cannot rely on any proprietary solution, even if it is a very good one. We have to be able to mix solutions in the field—and when I say solutions, I mean hardware and software coming from different providers. So it means certification; it means interoperability. And this has to be defined within the E4S organization.

Ian Fogg: So I think that the substation keeps coming up as a critical point, and that’s because it’s that focal point for local distribution, for data gathering, local control, and protection. And I think the other piece here is that piece of how do we build them, how do we upgrade those to give us longevity?

When we think about AI, which is becoming this incredibly important thing now—if we’d been having this conversation eight years ago we probably wouldn’t have been talking about AI. And in the timescale of a grid eight years is actually not that long. When we look at the technology we’re putting in, the open interfaces that Marc referred to are very important to give us that longevity, give us that ability to continue to upgrade.

Virtualization also gives us capabilities, because as you put functions into software—what we’ve seen in other industries is that it makes the innovation cycle easier. It gives you a longer runway of innovation: you can upgrade the underlying hardware and the software functions somewhat separately if you need to. And that, I think, is really important for the longevity and the ability we need to respond to changing consumption and demand patterns.

Christina Cardoza: That’s a great point, and it just highlights the need to future-proof. When we were talking about this eight years ago, AI wasn’t really on our radar, or some of these technologies weren’t on our radar. So to Marc and everyone’s point—

Ian Fogg: If it was on the radar, we’d have called it machine learning; we wouldn’t have called it AI. And certainly Gen AI wasn’t on anyone’s radar.

Christina Cardoza: So this need for future-proofing is even more important. I want to hear from Paul, from a hardware perspective, because it makes it very difficult for someone like Advantech to be able to keep up with all of these changes, to know what changes to bring in, and to allow your customers to be able to take advantage of these changes without having to go through all new investments. So I’m curious, how can companies future-proof? How is Advantech helping with future-proofing efforts when we are talking about the grid of the future?

Paul O’Shaughnessy: Well, if you step back from it a little bit, from our purely hardware perspective, and to some of the points that have already been raised—being a member of E4S, of vPAC, of the 450 MHz Alliance, and all these alliances—the one thing that’s really clear is that the only way we can achieve what we need to achieve as an ecosystem, and future-proof what we need to do, is through modular and scalable architecture. So: design systems with modular components that can easily be swapped out and replaced. That allows for the incremental changes you need to make to keep your systems where they need to be. Then there’s standardization and interoperability.

I guess for manufacturers like Advantech this is becoming more and more of a thing we need to deal with, because given the scale of the challenge in front of the grid right now—and Marc alluded to it—they can’t work with proprietary technology. It has to be open, because it has to be multi-vendor, and we totally get that. So standardization and interoperability—supporting the required open standards and protocols, and compatibility between different devices and systems—is going to be key to future-proofing.

I think there’s another piece that’s really important—because it is the top topic—and that’s robust cybersecurity measures: implementing strong cybersecurity practices at the front end to protect against evolving threats. This includes regular updates, threat detection, and response mechanisms, and edge AI should certainly be able to help drive that.

But then, coming back to it from our perspective as a manufacturer: one of the things that’s often overlooked—when customers talk about a particular platform they want to use, and they’ve defined it and benchmarked it—has to do with component selection and hardware life-cycle management. Before committing to the design, we need to complete life-cycle audits on the key technology components to ensure the longevity is built in.

So Advantech working with strategic partners like Intel—who actively share their roadmaps with us and make sure we are aware of the latest technologies that are fit for deployment in this type of environment—is really critical, and it’s something that I think is overlooked. A lot of the time when you talk about modular and scalable architecture, you also have to plan in some longevity at the front end, and the way to do that is to look at the key technologies in terms of life-cycle management.

Christina Cardoza: Yeah, that’s a great point. Having that partnership and collaboration and working with the E4S Alliance is going to be really important to future-proofing. Nothing is going to restrict you faster than vendor lock-in: you won’t be able to move to take advantage of new changes, or it’s going to be really expensive, because you’ll have to rework the investments you’ve already made.

So I wanted to touch a little bit on that, Paul, since you mentioned it: the importance of working in the E4S Alliance and working obviously with partners like Intel. I should mention that “insight.tech Talk” and insight.tech as a whole are sponsored by Intel. But I think they do a lot of work to help companies and customers move fast, take advantage of the changes happening, and adopt the latest and greatest while still being flexible and scalable. So can you talk a little bit more about the importance of those partnerships and collaboration?

Paul O’Shaughnessy: Sure. I think you mentioned vendor lock-in. I just want to go back to that point for a second. And of course as hardware manufacturers there’s nothing better than being locked in as a vendor, which is terrific. But the reality is the scale of this challenge and the scale of this opportunity requires so many vendors and so many moving parts that it has got to be an open architecture. And we totally get that.

Coming back to the point that you asked me about in relation to partnerships and collaboration, we are working now—along with everybody in this meeting—with a couple of major alliances: the vPAC Alliance—for Virtual Protection Automation and Control—in primary substations, and with the E4S Alliance for edge for substation. And those alliances and the collaboration that goes on within the various working groups—whether that’s hardware, software, go-to-market, cybersecurity, whatever it is—is really important for all of us—for hardware manufacturers, for software vendors, for end users.

And I think that whole ecosystem that sits within each of those alliances, it’s an opportunity for all of us to learn from each other and to understand what are the real requirements. Because it’s in that environment that the real requirements are discussed and that open architecture is being defined, so that for us as a hardware manufacturer, we understand what we need to be thinking about next. Even though the definition of what’s coming out of E4S as a single scalable box to satisfy all of the use cases may not be the perfect answer for many applications, at least there’s an abstraction between the software layer and the hardware layer that allows us to ensure that our hardware will perform in those applications, given the definitions that have been laid out by the alliance, and that’s the critical part.

Without the alliance, without the collaboration, and without the direction from the end users—which is the DSOs in this case—it would be a tough, tough job to achieve some of the things that need to be achieved.

Christina Cardoza: I completely agree. Like we mentioned in the beginning, there’s so many changes happening, so many places to make these changes, and so many different ways to come at it. Valerie mentioned it’s a journey, so having these alliances—it’s a great starting point. It’s a great point to keep us on track and keep us in the same direction. It may not be a one-size-fits-all approach, but it is something to look at and to consider and to help us on this journey.

And Valerie, the main themes throughout this conversation have been resiliency, efficiency, sustainability, flexibility. So, from your perspective, how are these partnerships and collaborations helping meet those goals and the pillars that Schneider has?

Valerie Layan: Yes. So, of course a lot was already said by Paul and Marc, and I also fully agree and support what they said. What I could add on top of what they said is that traditionally in energy management, energy infrastructure, and grid operation it’s an OT world. What we master very well is OT knowledge.

But here, when we speak about the challenge and the new technology we need to deploy, that comes from IT. So it was very important to build this alliance with players like Intel and Capgemini that have a lot of IT knowledge, to leverage the best of IT in our OT world and to make IT and OT work together. And we know that IT has already been leveraged in telco, in automotive, in health, for example. So this is why I see a lot of value in the alliance.

Of course, the alliance’s purpose is to get to an open standard—we mentioned it before—not to be a slave to proprietary protocols. So: an open standard, a kind of reference architecture and design, and common use cases, because it is also very important to have use cases driven by the user—in this case the DSOs, represented by Marc today.

But then the idea is to bring in that alliance the best of both from OT and IT and transform what used to be a very hardware player—the secondary substation—into something that is going to be virtualized and that will leverage the benefit of the two worlds. So for sure this collaboration is key. I think we are all here because we believe that, and we have a nice journey in front of us.

Christina Cardoza: So, we’ve been talking a lot about the challenges and the technology in all of these different spaces. I want to give our listeners some customer examples or case studies now, if we can, to really help paint a picture of how important these partnerships—and working with companies like all of you—are in making these ideas and solutions a reality.

So I want to turn it over to each of you, if you have any customer examples that you can share of how you are helping reshape the grid and helping customers reshape their efforts. Valerie, I’ll start with you on that one.

Valerie Layan: Yeah, sure, sure. I have some generic examples; I cannot name all the customers, of course. But I mentioned at the beginning our EcoStruxure IoT architecture, which is really the way we envision the network of the future, with three layers: connected products; edge control; and apps, analytics, software, and services.

We have been deploying this solution in some grid operations, and we know that we can reduce delivery time by more than 50% when we have an integrated solution—rather than having to configure each hardware element at each node, you have a true enterprise-level solution to manage the configuration of the network. So that’s one of the values.

I would also say that, overall, having an intelligent network helps in the operation and maintenance part of the life cycle, where we have been able to see a TCO—TotEx: CapEx plus OpEx—reduction of at least 15% from an intelligent network that leverages predictive maintenance, asset-performance management, etcetera.

The last point I would like to take as an example is what we are doing in a very new approach which we call LV grid management. We mentioned the edge; at the edge, below MV, there is LV, and it’s becoming more and more of an issue for DSOs because of all these consumers who want to connect their production back to the network.

So LV grid management is really an end-to-end approach, from the low-level sensor on the feeder, through protection and control at the substation, up to the ADMS, and we are piloting this with UNA IT. We will announce more details about that at an upcoming event. So, yes, we have started implementing this kind of solution, and we see the value in terms of CapEx optimization, OpEx optimization, and end-to-end management of a challenge we see coming with these evolutions.

Christina Cardoza: Great. Looking forward to some of the news you alluded to that you will be announcing in the near future. Philippe, I’m going to turn it to you next. Any customer examples you can share with us of Capgemini?

Philippe Vié: So we are working for many leading transmission and distribution grid operators on their smart grid journeys. There is no single smart grid journey for every player; there are various priorities. We start with the roadmap—it’s a consulting engagement, typically with technology skills, IT and OT, of course, because we are also in engineering and IT integration.

Each of the larger players is launching a smart grid program that will last 10 to 20 years, probably, with various priorities and various domains. I will not repeat the domains I mentioned earlier, but definitely it’s not one program; it’s a collection of large projects: a project to replace a control room, a project to automate something, a project to make a substation smart and virtualized—these kinds of things. And even when we get to network instrumentation, the smart substation, we see that the use cases are not the same.

Of course there is a common core of use cases. But some utilities focus on certain use cases for some substations, and on other use cases for other substations. So, definitely many examples: replacing the control room; instrumenting the network and the smart substation with the E4S and vPAC alliances; real-time asset-health management; digital twinning; digital engineering.

When you have to build thousands of kilometers of new lines—which has not happened in the developed world for a long time—digital engineering is definitely something very valuable. Asset-investment life-cycle planning, performance management—so many dimensions. I will not drop names.

There is only one I can name: Enedis, for which we are working on many dimensions, notably on data and on substations with Marc. But definitely there is a different roadmap for each of them, and a need to collaborate—I insist on that—on common standards, with the ecosystem of partners we are all part of these days.

Christina Cardoza: Great. And since you mentioned Enedis and Marc, I’ll turn the question to Marc next, but switch it up a little bit. How are you working with these partners to make some of the changes? Or what is Enedis doing in this space? How does their journey to the smart grid look?

Marc Delandre: First of all, I would like to add a comment to what Philippe said. There are many utilities in the world. Enedis is one of the largest, but we represent only 2% or 3% of the total market. So the market is huge. All the utilities are facing the same problems: strong investment for electrification, renewables, charging points for electric vehicles, and so on. Each utility has its own roadmap, but the target is more or less the same.

So we have the same problems. We have to work together to build a common solution—common solution within E4S, with Schneider, Capgemini, Advantech, and so on. We have big players within E4S, and we have all the knowledge to build the right solution for the market, and we have to work together on it. Enedis for sure is not a technology supplier; we are a customer. But we know perfectly or almost perfectly the problem on the grid, how to manage the grid. So I think all together with IT, Capgemini, and so on—all the partners—we are able to define the best solution for the coming years. 

Christina Cardoza: Great. And Paul, what can you tell us about how you’re helping customers like Enedis—or any other examples you may have to share?

Paul O’Shaughnessy: Yeah, maybe I’ll take a more specific story that I can share with you. Digital twins have been mentioned quite a few times in this discussion, and we had a customer, a distribution operator, that had a resilience issue in terms of communication and connectivity to their remote assets. And they had three primary issues to deal with.

One was the geography itself, one was environmental, and the other was geopolitical. So they had some significant challenges in terms of cybersecurity, and then the weather and the terrain. And we worked with a partner and that DNO to come up with a solution that would allow them to use one of our platforms, our software platforms, for device management, which uses digital twin modeling to ensure that those devices that are being deployed on the assets were secure and that the only way any changes could be made in any way was from the network-operation center.

And that was something that we developed over basically a two-year period with them and rolled out to 5,000 assets. And just to put it in perspective, this is something that started pre-Covid. And so when we talk about some of these applications, like digital twins, they sound like they’re really sophisticated things, but they can be quite simple in terms of modeling the piece of hardware or the application that needs to run on that particular piece of hardware. And again, when we talk about things and the way things change, and we’re now talking about virtualization, back then the topic wasn’t virtualization; the topic was containerization—containerization to help improve the security on the hardware.

So those are areas that we’ve worked with in the past, and today we continue to work within the alliances, obviously, to ensure that we are an active participant in the various working groups to ensure that our hardware and whatever solutions that we bring to market with the other ecosystem members in the alliances meet the standard required for the DSOs.

Ian Fogg: One of the things that comes up—and I think we talked about the importance of the communication of data backwards and forwards—is smart metering and the cellular capability, because smart meters often use cellular technology to communicate data. And there are really two problems with that.

One is that the data is often not that up to date: it arrives 15 to 60 minutes later, spread out, which, if you’ve got short spot pricing and you’re trying to do very smart tariffs, isn’t always quick enough. But the other piece is that it’s a hardware solution rather than a virtualized, software-defined solution, which means there are challenges now with older smart meters that have older cellular radios for network generations that the mobile operators want to switch off. And that speaks to the different pace, the different life cycle, of that part of the economy compared with the smart grid.

And of course having those smart meters is foundational for many of the usage scenarios of shifting consumption patterns. And if we’re having to start upgrading and replacing those early smart meters, that’s a whole pain point we shouldn’t really have to deal with. I think there’s an interesting dynamic around that. If those radios were software defined, perhaps we could update the software in the radio to respond to newer technology generations without having to do a truck roll or a hardware-replacement cycle.

Christina Cardoza: Thank you for sharing that, and thank you to everybody on this podcast. This has been a great conversation. It occurs to me that, despite us talking for so long, we’ve only scratched the surface—but we covered so much ground today. I just want to thank each of you for joining us.

And before we go, since this was such a big conversation, I want to turn it back to each of you—just one final thought, a key takeaway that you want listeners to really get out of this conversation.

Ian Fogg: The first thing I’d say is that there’s a lot more in the report that we haven’t even touched on here. So definitely have a look at the report. It’s all modular, to make it easy to digest.

But I think the bigger piece here is that there is transformation happening throughout the grid, and we’re in the middle of a period of quite rapid change. We’re not at the end of it, we’re not at the start; we’re right in the middle. And this is something that is strategic for this industry, but this industry is critical for so many other industries. We’ve talked, I think, about data centers and the AI there, and about the electrification of other parts of the economy—perhaps steel furnaces using electricity rather than coal. There are so many different areas this touches on that I find it fascinating. But have a look at the report. Lots more in it.

Christina Cardoza: And of course the report is available on insight.tech, so I’ll make sure to link it in the description for everybody to access easily.

Philippe Vié: Yes, I would like to speak to the ones who are making the change, meaning the transmission and distribution grid operators, whom we are all serving; these points are valid for hardware, software, and services suppliers too. One, you are the key enablers of the energy transition. Without you, without electric grids, no energy transition is possible. Secondly, you need to move forward fast, definitely, to accelerate—and judging by the investment planning of all the players, you are on the way to accelerating.

But for that you need a consistent roadmap, revised regularly, and you need to monitor and secure the benefits of all projects at all steps. You need to avoid grid expansion, when possible, through digital technologies. And you need to join standards efforts, E4S and other alliances, for effort sharing and investment reduction.

Paul O’Shaughnessy: Yeah, I would echo what Philippe has said. When we talk about our involvement in the grid business as a hardware manufacturer, internally I talk about a marathon, not a sprint, because it’s going to take a considerable amount of time to achieve things. But I think we’re now at a point where, to Philippe’s point, we need to accelerate. And the way to accelerate is through direction and priorities.

And the alliances are working to give us some of that, for sure. But to really get clear direction and priorities, we need many more of the people who run the grid—the DSOs, like Enedis and all the others—to come join and help us on the journey. Because even though the alliances include some of the biggest players in Europe and in the world, we need everybody buying in.

So it’s really important, and we need to accelerate for sure. And that’s my objective within Advantech: to do the same within our product groups and within the business in general, with a focus on energy and utilities.

Valerie Layan: First, I would summarize that a digital, end-to-end architecture for the grid is the best way to optimize all the investment, CapEx and OpEx, that grid operators have in front of them along the life cycle, from design and simulation through build, operation, and maintenance, thanks to all the new software-analytics layers that are available right now.

As for what is key to making that happen: standardization. We mentioned it, but it’s essential. Having a unified, open approach is vital for making that modernization journey efficient. Leveraging the successes, insights, and learnings from the digitization of other sectors is very valuable in this journey. And, finally, collaboration is key. We have to unite the OT companies, the IT companies, the end users, and their different experiences into one group. E4S is perfect for that, as a body that ensures we take all the knowledge from the different perspectives and build the smarter grid of the future.

Christina Cardoza: Great. And last but not least, Marc from Enedis, what can you leave us with today?

Marc Delandre: It has been very well summarized by my colleagues, and I don’t want to repeat, but to conclude I would say that we have in front of us a long and challenging and amazing journey.

Christina Cardoza: Absolutely. And I can’t wait to see what all of you do in this space—how the E4S Alliance standardizes things and how this journey continues. So I want to thank each of you again for joining the podcast, as well as all of our listeners for tuning in. It has been quite a conversation. And do visit insight.tech, where we continue these conversations and continue to cover the partners here on the podcast. So thank you all again. Until next time, this has been the “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript has been edited by Erin Noble, copy editor.

Sparking AI Innovation in the Developer Community

Businesses use AI to transform their industries, from manufacturing to medicine to education; that’s clear to even the casual observer. But behind all that transformation are developers building solutions and driving innovation. This evolution happens so fast that even professionals and experts have a hard time keeping up. Fortunately, no one needs to master the field of AI development alone; there’s a whole ecosystem of communities and partnerships out there to be leveraged.

Partnering for this discussion are representatives from two companies embedded in the AI and developer communities: Paula Ramos, AI Evangelist from Intel; and Jason Corso, Co-Founder of Voxel51. Jason is also a faculty member in the departments of Robotics and EECS at the University of Michigan. They talk about where AI is going and how developers contribute to its advancement, as well as the importance of engaging in the developer community—for the benefit of businesses and industry as well as for developers themselves (Video 1).

Video 1. Industry thought leaders from Intel and Voxel51 discuss the importance of fostering engagement among the developer community. (Source: insight.tech) 

How is the AI space evolving and what trends are shaping it?

Jason Corso: In the past few years there have been a few major developments driving the way we all think about AI. The first one is the availability of these large language models that can capture huge token lengths and embed natural human language into the model, giving us a resource with which we can interact really naturally.

We are also seeing language combined with vision as a key future trend. This will come with new compute capabilities, openly available data, and the ability to use these foundational models to really tackle new problems in what at Voxel we call “visual AI.”

Another thing I’d point to is an appreciation for the role that data is playing in the development of various AI/ML models. We’ve built this culture where the model is king. When you take a machine learning course in school, you start out training models with some data set you’ve downloaded or that the professor has given out—most of the focus is on the algorithm. But various leaders in the LLM space have begun to talk about the critical role that data, good data, high-quality data plays in this marriage of model, code, and data to build the AI systems that we’re using.

At Voxel, for example, we focus heavily on the role that data plays and on providing developer tools for engaging with data alongside the models—rather than just expecting a developer to gin up some scripts to visualize that data. Twenty years ago my data sets were dozens or hundreds of samples, right? Now we have data sets that are tens of millions of samples. So actually managing them and understanding the failure modes and the distribution and so on is very difficult and requires, I believe, new thinking.

What role do developers play in the AI advancements?

Paula Ramos: Developers are looking for their path every day because things are changing so fast. They need to drive innovation in the huge field that is artificial intelligence, so they need to be creative in order to solve problems. Maybe we have the same problems that we had 20 years ago, but there are better tools right now; there are better ways to approach the solutions. We also need to think more about the final user of the application.

“#Developers are looking for their path every day because things are changing so fast. They need to drive innovation in the huge field that is #ArtificialIntelligence.” – Paula Ramos, @Intel via @insightdottech

There are some challenges right now, and I think we still have room to grow in model development, data management, and how we can deploy those models in an easy way. Do you use a cloud system, or do you use an edge solution? The solution always has to be the simplest one possible, and this is the main challenge developers have right now.

Also, something that is really important in this field is the open source community; it is changing the cadence of AI. When these models are open to everyone, developers can access the data sets and improve the models round by round.

What are the best ways for developers to partner with companies like Intel?

Paula Ramos: We have multiple channels right now and a variety of solutions. For instance, we have hardware accelerators for retraining or for fine-tuning models. We also have solutions that work at the edge.

There are the Edge Reference Kits that developers can access. It’s one way to give a complex problem an easy solution. There we are trying to show them, with tutorials, code, and videos, how they can navigate specific verticals: manufacturing, retail, healthcare. The kits also cover LLMs and how to work with multiple models.

Or developers can use OpenVINO to optimize and quantize a model. That means they can use the same infrastructure they already have—we are not forcing developers to buy specific hardware to run models—and they can optimize and quantize LLMs. OpenVINO also enables developers to easily prove out and test these LLMs. They can create pilots and provide examples before moving to the real or final production system.
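
To make that workflow concrete, here is a minimal sketch using the OpenVINO Python API, assuming a recent OpenVINO release and a model already exported to ONNX. The file name and input shape are placeholders, and quantization itself is handled by the companion NNCF library (for example, its weight-compression utilities for LLMs), which is not shown here.

    import numpy as np
    import openvino as ov

    core = ov.Core()

    # Read a model exported to ONNX or OpenVINO IR ("model.onnx" is a
    # placeholder path) and compile it for the hardware already on hand;
    # "CPU" runs on any Intel processor, no special hardware required
    model = core.read_model("model.onnx")
    compiled = core.compile_model(model, device_name="CPU")

    # Run a quick sanity-check inference with dummy data shaped like a
    # typical image-classification input
    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = compiled(dummy)
    print(outputs[compiled.output(0)].shape)

Retargeting the same model to a GPU or NPU is just a matter of changing device_name, which is what lets a pilot built on a laptop carry over to other hardware later.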

We have an amazing repository and open source community where developers can test the latest AI trends. If something new came out today, literally in two days that specific model would show up in the OpenVINO notebook repository. You can test there, for example, Llama 3.1, YOLOv10, and the latest AI trends. This is a great tool.
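
For LLMs specifically, the notebooks typically lean on the Optimum Intel integration, which wraps OpenVINO behind the familiar Hugging Face API. Here is a rough sketch of that pattern, assuming the optimum-intel and transformers packages are installed; the small TinyLlama checkpoint is just an illustrative stand-in for whichever model you want to test.

    from optimum.intel import OVModelForCausalLM
    from transformers import AutoTokenizer

    # Any causal LM on the Hugging Face Hub; a small model keeps the demo quick
    model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # export=True converts the checkpoint to OpenVINO format on the fly
    model = OVModelForCausalLM.from_pretrained(model_id, export=True)

    inputs = tokenizer("The smart grid is", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=30)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))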

Developers can also access Intel Developer Cloud to test multiple kinds of hardware before buying it. That is really cool. They can access accelerators and the latest platforms—for example, the AI PC.

How is Voxel51 engaging with developers?

Jason Corso: Our software is called FiftyOne. It’s basically a visual component as well as a software SDK for doing the work that we’re talking about here, like data and model refinement. But most recently we have this new functionality called Panels. With Panels you can build functionality for the front end without knowing how to write React or JavaScript or anything with UX. You can write it directly in Python and still enhance the GUI functionality.
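
As a rough illustration of that SDK workflow (not the Panels API itself), here is a minimal sketch using FiftyOne’s built-in quickstart dataset. The 0.3 confidence threshold is an arbitrary example value, not anything prescribed by the library.

    import fiftyone as fo
    import fiftyone.zoo as foz
    from fiftyone import ViewField as F

    # Load a small sample dataset that ships with the FiftyOne zoo
    dataset = foz.load_zoo_dataset("quickstart")

    # Slice the data to inspect a potential failure mode: predictions
    # the model was not confident about
    low_conf = dataset.filter_labels("predictions", F("confidence") < 0.3)

    # Open the visual App on just that slice
    session = fo.launch_app(low_conf)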

As a company we are open source driven, but we do actually have dozens of customers that use our commercial-enterprise version of the FiftyOne software—called FiftyOne Teams—and it allows you to develop the same functionality together in teams, in the cloud, or on-prem. We have a pretty broad customer base across manufacturing, security, and automotive.

We closed our Series B funding earlier this year, and actually we’re hiring for machine learning engineer roles, among others—both for core engineering work as well as developer-relations work. We believe in developers so much, so we hire individuals who are fully trained and can write papers and code and so on, but their role is actually building bridges with the community.

How do industry events help developers engage with the wider community?

Jason Corso: Voxel had its first in-person hackathon just before CVPR. It’s one type of engagement where we see developers excited to be engaging with new technology and really trying to work together on new teams to solve a new problem.

That was fun, but I think a key angle for developer events is obviously education. One has to go to developer events or conferences like CVPR to really stay abreast of what’s happening. Last year I taught the course Intro to Computer Vision: In some sense, for three hours a week I was doing this developer event for 300 students to learn about computer vision.

But the AI space is evolving so rapidly that it seems like everyone is in constant information-gathering mode—even faculty members who have been in the field for ages. It’s impossible to stay up to date with everything from cutting-edge research papers all the way to the new APIs and libraries that you have to learn.

So at Voxel what we’ve tried to do is maintain a sort of weekly technical event that really allows the community to stay engaged. Just personally, for example, every Monday at noon Eastern I have open office hours and anyone can sign into them on Zoom. A couple of weeks ago we reviewed someone’s paper, and we went through slides and an actual technical model. But it goes all the way to: “This is my first time thinking about getting into computer vision. What should I be looking at?”

So there’s raw education just about foundational capabilities, but also developer events that really help engagement while staying up to date with what’s happening.

Are there any other available resources that developers should take advantage of?

Jason Corso: As Paula said earlier, being open source is the gateway to fostering innovation. Our software, FiftyOne, is on GitHub, and you can fork it and you can submit PRs. We make releases on the order of every month or two, and every release has some content from our community. We’ve been educated so much about community needs and by community contributions over the past four years since we released it. I just really want to express my thanks to the developer community that we’ve built. It’s such a vibrant and rich environment, and we wouldn’t be where we are today without that community.

There are actual events, but just becoming a part of open source projects is another great way to really get involved in the developer ecosystem for AI.

How does Intel foster community engagement for developers?

Paula Ramos: At Intel, we have been working hard on that part, and we are creating multiple ways to drive this innovation with developers. We have a huge ecosystem where we are trying to touch not just the inference part but also the training part—with anomaly detection, for example.

We have one program called the Innovator Program, where we have multiple developers around the globe testing technology. They can make their own applications and then share them with us. Basically, they create their own repository, fork it, and build new applications. I will be highlighting some of these innovators on my LinkedIn and across my network, so stay tuned.

Another thing that we participate in is the Google Summer of Code, and there we have several developers working with us for three months with different mentors from the OpenVINO team. Last year, one of the students involved with the Google Summer of Code created a paper with their mentors about Anomalib. The paper was submitted to the Visual Anomaly Inspection Workshop at CVPR, and it was accepted.

We are also moving fast in our relationships with universities, for sure, helping them to create research and research proposals that Intel can support. We are also closing the gap between industry and academia through conferences.

We have a real intention to work in the open source community, because here the most important thing is developers. We are always thinking about how to enable developers to use our hardware through the software we provide, so they can accelerate and improve their pipelines and workloads. That is the main intention.

Related Content

To learn more about AI development, listen to AI Partnerships Drive Developer Innovation. For the latest innovations from Voxel, follow them on X/Twitter at @voxel51, LinkedIn, and GitHub. For the latest innovations from Intel, follow them on X/Twitter at @intel, LinkedIn, and GitHub.

 

 

This article was edited by Erin Noble, copy editor.

Fast-Track Your Industrial Digital Transformation

In a volatile and highly competitive global market, digital transformation is the holy grail for industrial organizations. To gain an edge, manufacturers deploy the latest Industry 4.0 technologies like AI and computer vision to become more agile, reduce operating costs, secure their business, and grow revenue.

But this path toward industrial digital transformation comes with many hurdles: an aging workforce and skills gap, growing cybersecurity risks, OT and IT silos, and the complexities of harnessing widespread data. Business leaders are tasked with figuring out how they can leverage the technology and overcome these challenges—and where they should even begin. Engaging with a partner like global technology solution provider World Wide Technology (WWT)—with its deep expertise in both industrial markets and technologies—is a great place to start.

“We can work with clients to find and identify the quantitative and qualitative digital transformation challenges that are impacting their businesses. We look at the biggest opportunities to mitigate risk, decrease costs, increase margins, and grow revenue,” says Mike Trojecki, Senior Director, AI Practice at WWT.

Industrial #DigitalTransformation is more than just deploying #technologies like #AI and #ComputerVision. You must address internal cultural issues as well. @wwt_inc via @insightdottech

According to Trojecki, industrial digital transformation is more than just deploying technologies like AI and computer vision. You must address internal cultural issues as well. “It’s the age-old question of IT looking at the OT people and thinking that they don’t understand the technology challenges from an IT standpoint,” he explains. “And then you have the OT people who look at IT as an inhibitor to getting their work done.”

With WWT’s 30+ years of both on-the-floor plant experience and work with IT organizations, the company can not only help companies overcome challenges but also serve as a conduit between IT and OT, helping companies bridge that gap. That may be as simple as an educational undertaking or as involved as a full end-to-end POC to take the necessary steps in moving from point A to point B.

Overcome Business Challenges with Industry 4.0 Technologies

When WWT looks at a client’s overall processes, it can map the technologies and solutions that solve key business challenges.

For example, a common dilemma industrial organizations face is keeping workers safe on the plant floor. “When you look at computer vision, it’s not just about defect inspection and detection,” Trojecki says. “It’s also technology for keeping people safe and secure. If someone places themselves in a situation where they could be injured, AI and computer vision can recognize the safety issue and potentially take action to prevent an injury or even a fatality.”
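
Trojecki doesn’t detail the implementation, but the underlying pattern is straightforward: an upstream detection model finds people in the camera feed, and simple zone logic raises an alert when a person enters a hazardous area. A hypothetical sketch, with the zone coordinates and detections as placeholder values:

    def boxes_overlap(a, b):
        """Axis-aligned overlap test between two (x1, y1, x2, y2) boxes."""
        return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

    # Hypothetical machine area in pixel coordinates
    HAZARD_ZONE = (400, 200, 640, 480)

    # Person bounding boxes from an upstream computer vision model
    detections = [(100, 150, 180, 400), (420, 260, 500, 470)]

    for box in detections:
        if boxes_overlap(box, HAZARD_ZONE):
            print("Safety alert: person detected in hazard zone", box)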

WWT also uses AI and other Industry 4.0 technologies to power digital twinning, which is essential to the manufacturing landscape for use cases from virtually testing products to digitizing a factory, and even running a process through a virtual environment. There’s a lot of value to be gained from digital twins, and Trojecki says WWT is realizing it with customers in plants, distribution centers, and data centers.

“You can create a digital representation of a new product before it goes into a manufacturing line, and even prove out a process change without having to do so physically,” says Trojecki. “A digital simulation of the physical and process world gives operational and IT teams a better shot at success with less risk and at a lower cost.”

Predictive Maintenance Lowers Operating Costs

Implementing predictive maintenance solutions across plant floors is another example of how WWT helps its clients modernize manufacturing processes and reduce operating costs. Traditionally, predictive maintenance has been the domain of factory line workers with years of experience and the know-how to detect machine anomalies through abnormal vibrations or noise and interpret impending problems.

As this generation of workers retires, companies are transitioning to a data analytics model that is more cost-effective but requires a whole new set of skills. WWT’s experience in bringing OT and IT together, training a new generation of workers, and deploying complex AI solutions pays off.

For example, WWT worked with a machine manufacturer that needed to minimize equipment downtime. The company’s ultimate goal was to reduce errors and quality deterioration in the manufacturing process. A predictive maintenance solution—using sensor data and computer vision—was deployed to analyze machine health and take action before there was a problem, which helped to reduce overall machine maintenance and support costs.
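
The specifics of that deployment aren’t public, but the sensor-data side of machine-health monitoring can be sketched generically. The following hypothetical example flags vibration readings that drift sharply from a rolling baseline; the window size and threshold are illustrative values, not parameters from WWT’s solution.

    import numpy as np

    def anomaly_flags(readings, window=100, threshold=3.0):
        """Flag readings more than `threshold` standard deviations away
        from the mean of the preceding `window` samples."""
        flags = np.zeros(len(readings), dtype=bool)
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]
            mu, sigma = baseline.mean(), baseline.std()
            if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
                flags[i] = True
        return flags

    # Simulated vibration stream: steady operation, then a developing fault
    rng = np.random.default_rng(0)
    signal = rng.normal(1.0, 0.05, 1000)
    signal[800:] += np.linspace(0, 1.0, 200)  # drift as a bearing degrades
    print("First anomaly at sample", int(np.argmax(anomaly_flags(signal))))

In practice a check like this runs per sensor channel and feeds a dashboard or work-order system rather than a print statement.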

WWT partnered with software and hardware providers to develop a complete solution that empowers the client to look into the manufacturing process, change the control network, and fix problems on the fly, instead of pulling systems offline. The solution increases machine utilization and saves the company from having to reevaluate an entire manufacturing line. Now the customer can keep machines in service longer, reduce downtime, and lower the cost of getting product out the door.

Close collaboration with solution providers is essential to WWT’s business and customer success. Intel and its extensive partner ecosystem enable WWT to select and implement the right product mix that best solves specific digital transformation challenges.

And it’s a two-way street. “We align with our partners from a technical standpoint and a solution development standpoint,” says Trojecki. “We can’t do it without them. And to be honest, on the other side, they can’t do it without us.”

Industrial Digital Transformation Expertise Leads to Business Success

As company stakeholders look at new opportunities, AI has become the number-one driver in most conversations. “Clients are asking, ‘How is AI going to affect what we’re doing in the plant?’ ‘How will it play in our overall industrial operations?’” says Trojecki. “Businesses need to start somewhere and those that don’t embrace emerging technologies will quickly fall behind.”

Across the globe, WWT is well positioned to help organizations succeed in achieving their transformation goals.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Seamless Digital Collaboration with Smart Boards

Imagine a traditional brainstorming session with a whiteboard. One person writes down ideas on the board as the rest of the team members strategize. A separate individual might take notes to share at the end of the meeting.

The takeaways everyone remembers (and forgets) days later depend on the efficiency of the note-taker. The process becomes even more complicated when colleagues dial in remotely. Most cannot see the whiteboard clearly or hear individual contributions.

Digital collaborations can be effective only when all participants are on the same page, quite literally. A smooth brainstorming session in the hybrid work era needs a fully integrated hardware and software platform so everyone can collaborate easily.

Through an app, people can draw, annotate, and manipulate content directly on the 86-inch touchscreen. The benefit is the tactile nature of the #interactive display. TQ-Group via @insightdottech

Seamless Smart Board Platform

The cannyboard, a product developed by TQ-Group, is the comprehensive solution organizations need. A touchscreen smart board display functions as the primary collaborative tool, and it leverages something practically every participant already has: a mobile device. Through an app, people can draw, annotate, and manipulate content directly on the 86-inch touchscreen. The benefit is the tactile nature of the interactive display (Figure 1).

Two people working together creating content on a cannyboard display.
Figure 1. cannyboard is an 86-inch touchscreen smart board display that provides collaboration, content annotation, and manipulation capabilities. (Source: cannyboard)

They can also share their screens wirelessly, allowing them to contribute to the session without being tethered to the board.

“Such flexibility is vital in collaborative settings where multiple participants need to share content quickly and effortlessly,” says Sofie Bauer, Head of cannyboard at TQ, a German technology solutions provider.

In addition, the option to connect PCs via HDMI and USB expands the functionality of the cannyboard. “It not only allows users to share their screen but also to control their device from the cannyboard—effectively turning it into an extension of their own device,” Bauer says. The cannyboard stores sessions in the cloud so authorized members can always access the latest work from anywhere.

Digital Collaboration Use Cases

The cannyboard supports applications in pretty much every environment that requires team members to work together. The solution facilitates collaboration in a variety of spaces, including corporate meeting rooms, conference halls, hotels, training centers, shared office spaces, educational institutions, creative agencies, showrooms, trade fairs, exhibitions, and retail areas.

A smart board can be especially useful in sports and entertainment. The professional volleyball players of TSV Herrsching, for example, use the cannyboard for game preparation. The cannyboard integrates all the tools needed for meetings, analyses, and strategy development, whether participants are in the same room or connecting remotely.

Thomas Ranner, Head Coach of the WWK Volleys, says video analysis through the cannyboard has been instrumental for game prep. Before adopting the cannyboard, the team had to find a location where the lighting was right so that everyone could see the projector well. Video analysis required a whiteboard (for scribbling tactical ideas), a laptop, and a device for each player to display game plans.

“That is too many devices to carry around and it causes the players’ attention to shift back and forth between them. It was quite cumbersome,” says Max Hauser, team CEO and coach.

Today team meetings are completely different. “We don’t have to constantly switch between different devices for video, game plan, and whiteboard, but can run and edit everything in parallel, which makes our meetings much easier,” Hauser says. The cannyboard replaces the projector, laptop, and whiteboard in one device. While the game video plays on the board, teammates can draw or annotate directly on the video or screen and take screenshots at the same time.

The Technology Behind Interactive Displays

cannyboard ensures each session is independent and that no data persists beyond the session’s duration. “By implementing automatic data deletion, temporary storage, and session isolation, cannyboard protects user data and maintains privacy, giving users confidence that their information is secure every time they use the system,” TQ says.

The solution is based on the latest Intel® Core processors, ensuring an “amazing user experience through high computing power and excellent graphics,” Bauer says. Intel wireless technology provides high-speed integration into Wi-Fi infrastructure and supports Bluetooth for easy connection of external devices like wireless keyboards. And high power efficiency through Intel performance hybrid architecture reduces overall power consumption and operating costs.

The Future of Digital Collaboration

Digital collaboration will become an essential factor in enhancing speed and efficiency in daily work, Bauer predicts.

Especially when working with team members in different countries and time zones, digital collaboration saves an immense amount of time otherwise spent traveling. It also reduces confusion by bringing interactions onto one screen. “The efficiency gained allows us all time for breaks, quality time with family and friends, which are vital in our fast-paced world,” Bauer says.

But despite its promise, embracing digital collaboration requires an organizational culture that fosters openness and flexibility. Increasing use of data-intensive applications will require high-speed, reliable bandwidth and superior audio and video quality. “In addition, we need seamless integration with existing systems to maximize the return on investment in new technologies,” Bauer says. Avatars, virtual reality, simultaneous translation, gesture recognition, and voice commands soon will be part of collaboration tools.

Advancements in 5G, edge computing, cloud technologies, and AI will shape the future of collaboration. For now, collaborators everywhere can be relieved that the days of basic whiteboard brainstorms might well be in the past.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

AI Partnerships Drive Developer Innovation

Are you ready to take your AI career to the next level? We dive deep into the world of strategic partnerships, uncovering everything from finding the perfect match to harnessing the power of developer communities. Get ready for insider tips that will help you build the future of AI.

We explore the game-changing potential of AI partnerships—how can businesses and developers come together to create groundbreaking solutions? What’s the secret sauce to a successful collaboration? We also dive into the crucial role that developer communities and events play in driving innovation and building connections.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guests: Intel and Voxel51

Our guests this episode are Jason Corso, Cofounder and Chief Science Officer at Voxel51, a computer vision and visual AI solution provider; and Paula Ramos, AI Evangelist at Intel. Jason cofounded Voxel in 2016 with a mission to provide developers with open-source software frameworks. The company also offers an enterprise version of its framework to enable multiple users to securely collaborate. Paula joined Intel in 2021 and has worked to build and foster developer communities around Intel AI software.

Podcast Topics

Jason and Paula answer our questions about:

    • 2:11 – The evolving artificial intelligence landscape
    • 6:19 – How developers can keep up with the changes
    • 9:31 – Gaining developer support from large companies
    • 13:53 – Being part of developer communities and events
    • 17:21 – Staying on top of upcoming AI trends
    • 19:37 – Fostering community engagement

Related Content

For the latest innovations from Voxel, follow them on X/Twitter at @voxel51, LinkedIn, and GitHub. For the latest innovations from Intel, follow them on X/Twitter at @intel, LinkedIn, and GitHub.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” where we explore the latest IoT, edge, AI, and network technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to be talking about AI partnerships that spark developer engagement and innovations.

Who better to discuss this with than two companies embedded in the AI and developer communities. Today we’ll be speaking to Paula Ramos from Intel as well as Jason Corso from Voxel51. But as always, before we get started, let’s get to know our guests. Paula, a good friend of the show; for those of us who haven’t heard your previous conversations, what can you tell us about yourself and what you’re doing at Intel?

Paula Ramos: Yes, for sure. Thank you, Christina, for having me here. So, I’m so excited. So I, Paula Ramos, I have a PhD in computer vision and machine learning, and I’m working at Intel as AI Evangelist, working with multiple products and multiple developers around the globe.

Christina Cardoza: Great. And Jason Corso from Voxel51, first-time guest of the podcast. What can you tell us about yourself and Voxel?

Jason Corso: Likewise. Nice to meet you all. Thanks for the invitation, Christina. So, Jason Corso. Yeah, I have a PhD in computer science. I’m a Co-Founder at Voxel. At Voxel we make a software refinery to help you work with your data, your models, various needs, and kind of refine them into production visual AI.

I’m also on the faculty of robotics and EECS at the University of Michigan, where I’ve done research for the last 10 or 15 years in computer vision and machine learning, all at the boundaries between the physical world and what we can do with computational systems these days.

Christina Cardoza: Awesome. So you’ve been in this space for a long time now and have probably seen it evolve even—it feels like every day something new is happening, and it’s evolving even further. So that’s where I wanted to start off the conversation with you, Jason. If you could just talk about what you’re seeing in this space, how it has changed over the last few years, where we are today, and what are the trends shaping where we are.

Jason Corso: Yeah. Indeed, it has changed quite a bit in the last, even the last few years, also the last 20 years. Like when I was doing my PhD, we were looking at things about how you can use computer vision to understand gestures and so on to interact with the computer, and look where we are now, right? 20 years later it’s been quite a wild ride.

So last few years, let’s see. I think there probably are two major developments I would argue in the last couple years that really are driving the way we all think about AI. So the first one is probably pretty obvious, right? Like the availability of these large language models that capture huge token lengths and can embed actually natural human language into the model—a resource with which we can interact rather naturally.

Now, I mean, there are an awful lot of questions around what their limitations are and their capabilities are, but at the same time I think it’d be easy to find lots of different applications, right? I think in the beginning of this year I wrote some quick note on LinkedIn about how I think LLMs will evolve in this coming—in 2024, this year. One of those key elements that I thought was that we would see a true revolution in how we think about search—and just information search, information gathering, and all that and so on. And I think we really are beginning to see that.

I think on the other one, though, I’d probably point to an appreciation for the role that data has begun to play or has been playing in the development of various AI ML models. Everyone when you go to school, in grad school, you take your machine learning course, and you go and start training models to recognize digits and so on. You just go quickly download a data set, either it’s from some repository or your professor gives it to you, and most of the focus is on the algorithm.

And so we’ve built this culture of the model is king. But if you really think about what’s happening, even various leaders in the LLM space—to bring back to the first one—have begun to talk about the critical role that data, good data, high-quality data plays in this marriage of model, code, and data to build the AI systems that we’re using.

So I don’t know exactly where that appreciation is going to lead us. At my company, for example, we focus heavily on the role that data plays and providing developer tools for engaging with data alongside their models, rather than just expecting you to gen up some scripts to visualize your data or whatever, right?

But I think it’s good for me, because it’s a long time since when I was—like, 20 years ago my data sets were dozens of samples, hundreds of samples, right? Now we have data sets that are dozens of millions of samples or whatever. So actually managing them and understanding the failure modes and the distribution and so on is very difficult and requires, I think, new thinking.

Christina Cardoza: Yeah, absolutely. And you mentioned the search and information gathering. I’m definitely seeing on the consumer side AI being more prominent in these areas. When I search on Google or anything now, instead of just getting a list of links, an answer from Gemini comes up.

So it’s interesting to see how AI is evolving, but I’m glad you brought up LLMs, the repositories, and algorithms, and this data, and these models, because it’s really the developers that are pushing these advancements forward. A lot of times on “insight.tech,” we’re writing about advancements in manufacturing and retail and education, how businesses are using AI to transform their spaces; but what’s behind these transformations are really developers that are building these solutions that are working with LLMs.

So, Paula, I’m curious from your take, because you work with a lot of developers, you talk to a lot of developers in this space, what has their role been in keeping up with AI? And how can they even continue to compete in this space with all of the advancements and skill sets happening?

Paula Ramos: Yeah, that is a great question. I think that all of the developers are looking for their path every day because things are changing so fast. But the main thing that we need to have in mind as developers—what kind of challenges we have—is that we need to drive innovation in the huge field that is artificial intelligence. So we need to be creative, we need to build intelligent applications, and we need to solve problems.

Maybe we have the same problems that we had 20 years ago, as Jason was mentioning, but we have better tools right now. We have better ways to approach those solutions, but we need to be creative with them. Still, we have a lot of tools, and we need to think about the final user of the applications.

So I think that there are some challenges right now where we still have room to improve: model development, data management, or how we can deploy those models in an easy way. We could use a cloud system, or we could use an edge solution. Independent of that, for sure, the skills that we need could be different, but basically we need developers programming in different kinds of languages, organizing or producing different kinds of data sets.

Also something that is really important in this field is the open source community. The open source community is changing the cadence of AI, because when we have these models open to everyone, they can access those models. So they can access those data sets and improve those models round by round.

So I think that the responsibility that we have as developers is huge in this new era of AI. For sure, I think roles are in different kinds of sectors. We can talk about manufacturing, retail, but more than that it is about what kind of problem we want to solve today. It could be complex, it could be simple, but the solution always should be the simplest possible, and this is the main challenge that developers have right now.

Christina Cardoza: Yeah, I love how you said we need to drive innovations, we need to create intelligent applications, we need to solve problems. Because developers aren’t in it alone; they don’t have to build it from scratch. They can leverage partners like Intel and Voxel and community members to make some of this happen.

For instance, I love that Intel has the Edge Reference Kits, and sometimes you guys are walking them through how to build a solution and giving them the code to do self-checkout or to build something in manufacturing, and they can just customize it after they learn a little bit more about it and how to do that.

So I’m curious, in what other ways can developers partner with companies like Intel, and how that’s going to benefit them to reach out into these different areas and to ask for help or ask questions and be a part of the Intel community or other open source communities?

Paula Ramos: That’s a great question. So, we have multiple channels right now. As you mentioned, we have the Edge Reference Kits that developers can access. In an easy way they can find a solution—a complex problem with an easy solution—where we are trying to show them, with tutorials, code, and videos, how they can navigate a specific vertical: manufacturing, retail, healthcare, LLMs as well, and working with multiple models.

Intel has a variety of solutions. Basically we have solutions—we have hardware accelerators for retraining, for fine-tuning models, but also we have solutions that work at the edge. Or also you can use your laptop—you can use your laptops to work with AI. And we are creating a specific framework that is called OpenVINO, where developers can use OpenVINO to optimize and quantize a model. That means that they can use the same infrastructure that they have, they can use the same computer, and they can run LLMs, optimize and quantize LLMs—INT4, for example—or they can use the integrated GPU that Intel also provides in its processors.

I think that Intel with OpenVINO is enabling developers to easily prove out and test these LLMs. And this is just one step behind the real solution, the solution that we want to put in the production systems. So they can create pilots; they can impress the bosses with the tutorials and examples that they can run on their own laptops before moving to the real or the final production system. And Intel has this possibility also. Developers can access Intel Developer Cloud to test multiple kinds of hardware before buying that hardware. That is really cool. And they can also access accelerators and the latest platforms—for example, the AI PC.

So we are providing a lot of tools to developers, and also we have—I almost missed that—but we have an amazing repository where developers can test the latest AI trends. So we have the OpenVINO notebooks repository, where if something new comes out today, literally in two days we will see the notebook with that specific model, for sure. This is for the open source community. So you can test there, for example, Llama 3.1, YOLOv10, and the latest AI trends. And this is a great tool.

And the most important thing is we are not forcing developers to buy specific hardware to run those models, so developers can also run these models on the hardware that they already have. We are also supporting ARM, and we are also supporting a variety of Intel hardware. Also integrated GPUs—the integrated GPU is the most used GPU that we can see in the world.

Christina Cardoza: Yeah, it’s great that you are making it easy for developers to get started with the equipment or hardware that they have. And a lot of the kits, challenges, and repositories we were just talking about—these are ongoing things that are available to developers at any time. But I’m thinking about—I know recently, which probably feels like forever ago, you were at CVPR, and there was a competition and challenge going on there. So that’s more of a one-off, timely challenge that is sometimes available to developers going to these events.

So I’m curious, Jason, because I know the company was also at that event, but there’s other events that you guys host or that you’re at that have these developer challenges. I’m curious, what would you say is the importance of developers going to these events, engaging in these communities, and participating in some of those competitions?

Jason Corso: Yeah, it’s a good point, right? I mean, even just before CVPR, Voxel had our first in-person hackathon, actually in New York City, and it’s that type of engagement where we really see excited developers engaging with new technology and then really trying to work together on new teams to solve a new problem.

That was really fun, but I think one key angle for developer events is obviously education, right? Learning new things. And I think if you take my earlier answer about how AI has evolved and think about a key trend for the future, a key trend that we’re seeing for the future is language combined with vision combined with new compute capabilities and openly available data, and these foundational models to really tackle new problems in what at Voxel we call visual AI.

I think we’re going to see increasing contributions to that effect, but how do you do it? What do you do? One has to go to developer events or other types of conferences like CVPR or whatever, truly, to really stay abreast of what’s happening there. I mean, for me it’s, in some sense, the educational side is very natural, right? I’m a faculty member; I teach. I’m not teaching right now this year, but last year I taught intro to computer vision. So three hours a week I was doing this developer event, in some sense, for 300 students to learn about computer vision.

So I think one thing we’ve learned at Voxel is this AI space is evolving so rapidly that it seems like everyone—even faculty members who’ve been in the field for ages—is in constant information-gathering mode. It’s impossible to stay up to date with everything, from cutting-edge research papers on one hand all the way to the new APIs and libraries that you then have to learn.

And so to do this, at least at Voxel, what we’ve tried to do is maintain a weekly—at least one per week, if not more—sort of technical output, in some form of an event, in different formats, that really allows the community to stay engaged. So we have an events calendar at voxel51.com that we can include in the show notes. I think we have something like two dozen events scheduled between now and the end of the year.

Just personally, for example, every Monday at noon Eastern I maintain these open office hours that anyone can sign into—they’re on Zoom. We talk about everything: a couple weeks ago we were reviewing someone’s paper, and we went through slides and an actual technical model. All the way to—oftentimes I get asked, “This is my first time thinking about getting into computer vision. What should I look at first?” Right? So, pretty broad. But we have some hackathons, virtual meetups, and so on. So I think it’s raw education about foundational capabilities, but these developer events also really help engagement, just from staying up to date with what’s happening.

Christina Cardoza: That’s great, and that’s awesome that you have those open hours that developers can just join and start to learn. I’m curious, because obviously there are virtual conferences, and then there can be conferences in different parts of the world, and it can be tough for developers: they can’t go to all of them, and there are just so many out there that it’s hard to choose. Is there anything coming up that you want to call out that developers should have on their radar? Or are there any other resources available to them online that you think they should take advantage of?

Jason Corso: What Paula was saying earlier—being open source is like the gateway to fostering innovation, right? Our software at Voxel51 is called FiftyOne. It’s on GitHub. We have permissive licensing for the open source component of it, which is basically one user, one machine, local data. You can fork it. You can submit PRs. We make releases—I think it’s on the order of every one to two months. Every release that we have has some content from our community, and we’ve been educated so much over the last four years since we released it by community needs and community contributions.

Most recently we have this new functionality called Panels, which—FiftyOne is basically a visual component as well as a software SDK for doing the work that we’re talking about here, like data and model refinement. But with Panels you can build functionality for the front end without knowing how to write React or JavaScript or anything with UX. You can write it right in Python, and all of a sudden you can still enhance the GUI functionality.

So I think those are great ways—actual events, but also just becoming a part of open source projects is another way to really get involved in the developer ecosystem for AI.

Christina Cardoza: Yeah, absolutely. And I think it also helps companies like yours that have these open source models. You might not have picked up on something that somebody in the developer community picks up on, and they can really be a part of that community, make changes, point things out, and contribute to companies and projects like yours. So it’s always great to be a part of those discussions, see what’s going on, and hear what developers are talking about, as well as some of the ongoing challenges they’re facing in these spaces.

Paula, I know OpenVINO—there’s a huge GitHub community around there, and you mentioned a little bit of the kits and some other things that Intel offers, but I’m curious, in what other ways does Intel foster that innovation and that community engagement for developers?

Paula Ramos: That is a great question, because we have been working so hard on that part as well. So we have multiple ways—we are creating multiple ways to drive this innovation with developers. We have one program, the Innovator Program, where we have multiple developers around the globe who can try and test technology. They can make their own applications, and they can share them with us. So just stay tuned, for example, on my LinkedIn or in my network as well: we are highlighting some of these innovators. This is one thing that we have. And basically they create their own repository—they fork the repository—and they create new applications or improve the application with their contribution.

So another thing that we have is Google Summer of Code. We have a program with Google every year where we have multiple proposals, and we have several developers around the globe working with us for three months with different mentors on the OpenVINO team. And, for example, you mentioned CVPR.

So, we worked with Anomalib. That is a library that we also have in the OpenVINO ecosystem, and we had two proposals last year about Anomalib. From one of those proposals, the student who was involved in Google Summer of Code, along with the mentors and the professor, created a paper. The paper was submitted to the Visual Anomaly Inspection Workshop at CVPR, the workshop on anomaly detection, and it was accepted. So we are also closing the gap between industry and academia with conferences. We are also participating with students and developers in those conferences through programs such as Google Summer of Code.

But more than that, for sure we are also moving fast in our relationships with universities: what kinds of things we can work on with universities, helping them to create research and research proposals that Intel can also support.

At CVPR we are also sponsoring the challenge in this workshop about anomaly detection. We also try to invite developers, and we created a marketing campaign around the challenge to invite developers to participate. We received more than 400 participants and more than 100 submissions in maybe a month and a half. That was an amazing and remarkable number, and we can see how the knowledge around anomaly detection is moving.

For sure, talking about OpenVINO we have multiple things. As I mentioned before, OpenVINO is an open source tool, and we have repositories with different kinds of contributions depending on the product. So we have OpenVINO, OpenVINO notebooks, and OpenVINO build and deploy. In the OpenVINO build and deploy repository you can find all the Edge Reference Kits that we have been talking about today; in OpenVINO notebooks you can find the tutorials; and in the OpenVINO repository you can find the API.

So we have a huge ecosystem where we are trying to touch not just the inference part but also the training part, with anomaly detection, Anomalib, and also OpenVINO Training Extensions. It is a huge ecosystem, and I really want to invite all the developers and all the people who are watching or listening to this podcast to visit those repositories—visit the “openvinotoolkit” organization on GitHub, and you can find all the repositories that I’m talking about.

Christina Cardoza: Absolutely. It’s exciting hearing all of these different resources, all these different ways developers can get started. I’m excited to see, moving forward, what types of solutions and innovations developers continue to build, and I hope they take you guys up on some of these events and meet you—whether that’s in person or virtually. I know sometimes it can be intimidating when you’re getting started in these areas, but having companies like Voxel and Intel support developers, that’s great to see.

And I also saw, Jason, in addition to the virtual office hours, there’s availability to do one-on-one meetings. So if developers feel intimidated somehow or don’t want to ask a question in a group setting, it’s great that you guys are making yourself available to help developers when and where they need it.

So I want to thank you both again for joining us on this podcast. Before we go, are there any final thoughts or key takeaways you want to leave developers with as they go on this journey, engage with each other, and engage with you? Jason, I’ll start with you.

Jason Corso: Great. Yeah, thanks very much. So, I mean, first parting thought would be that I think I just want to express my thanks to the developer community that we’ve built over the last four or five years. We wouldn’t be where we are today without the community. It’s such a vibrant and rich environment.

But the second thing is that we’re hiring; we’re hiring developers. We’re actually hiring across the board as we grow, after closing our Series B earlier this year, but most relevant to this conversation are machine learning engineer roles, both for core engineering work and for developer relations work. We believe in developers so much that we hire individuals who are fully trained, who can write papers and write code and so on, but whose role is actually building bridges with the community.

And then maybe just the last parting remark: As a company we are open source driven, but we do have dozens of customers that use our commercial enterprise version, which we call FiftyOne Teams. It relaxes that single-user, local-data way of working and allows you to use the same functionality together in teams, in the cloud or on-prem. And we’d love to engage in conversations around FiftyOne Teams with your community as well. We have customers, many of them in the Fortune 500, across manufacturing, security, and automotive, so a pretty broad customer base. So, thanks.

Christina Cardoza: Yeah, absolutely love to hear about job openings. It shows this space is growing, this space is becoming important, and some of the innovations and transformations that we talk about on “insight.tech” wouldn’t be possible without developers. So, exciting opportunity for anybody listening to go join the Voxel51 team.

Paula, always love having you on the podcast. Thank you, again. I feel like every conversation there’s something new to talk about, something new happening in the AI space. So, curious what our next conversation will be about. But before we go, are there any final thoughts or key takeaways you want to leave with us?

Paula Ramos: Yes, for sure. So, first of all, thank you. Thank you, Christina, for creating this space to talk about what we have. And thank you also to Voxel51. We have been building a great relationship with Voxel51; at different conferences we try to share some space together.

And this also says a lot about our real intention to work in the open source community. We are open to working with all of you to find the best path for developers, because the most important thing here is developers. The company is really important, for sure; we have a lot to learn about what kinds of products and what kinds of tools we can provide to developers. And we are always thinking that we need to enable you to use this hardware with the software we provide, so you can accelerate and improve your pipelines and your workloads. That is the main intention.

We have a lot of things to share with you right now. We talked about OpenVINO and the Edge Reference Kits, but more things are coming in the future. For example, we have the new AI PC that you can try. It has a new engine in the microprocessor, the NPU, or Neural Processing Unit, that can also expedite and accelerate parts of both conventional and generative AI workloads on that small device. This is one of the things we can talk about in the future, Christina, for sure. Thank you again, and I’m looking forward to connecting with all of you.

Christina Cardoza: Absolutely, and you talked earlier about how some of these innovations and tools you have available make it easy for developers to start working no matter what hardware they’re using, and the AI PC just makes AI development, deployment, and the performance of your solutions that much easier, all that great stuff. So I know Intel has a lot of resources around AI PCs that we’ll make sure to provide to developers as well.

But thank you both again for joining us today. Thank you to Intel and Voxel51 for these great resources and communities you’ve created for developers and spaces for them to get started and get that support. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Addressing the Design Challenges of 5G OpenRAN

The arrival of 5G has captured the attention of industries worldwide, unlocking new possibilities for high-speed connectivity at a massive scale. In sectors like manufacturing and smart cities, for example, 5G allows far-flung facilities to be networked into a unified whole, enabling unprecedented visibility and responsiveness.

But many applications have needs that public 5G networks cannot meet. This is where private 5G networks step into the spotlight. “There is a pressing need for customized infrastructure to fully leverage the capabilities of 5G,” explains Zeljko Loncaric, Market Segment Manager of Infrastructure at congatec, an embedded computer boards and modules provider, pointing out security, real-time reliability, and network flexibility as some of the key requirements.

This growing demand for tailored solutions and the adoption of private 5G networks come at a perfect time, coinciding with the emergence of open standards like OpenRAN. This shift presents a unique opportunity for telecommunications equipment manufacturers (TEMs), who are no longer constrained by markets dominated by a few major players. Instead, OpenRAN’s open interfaces and standards promote vendor diversity—an important strategic focus for TEMs, Loncaric notes.

Opening Up New Possibilities for OpenRAN

Historically, to build 5G solutions that leverage OpenRAN capabilities, TEMs have had to overcome several hurdles. Specifically:

  • Integrating components from various sources while keeping performance high and costs low.
  • Ensuring robust security. This is a particularly pressing concern for TEMs targeting private 5G networks, which often host high-value data.
  • Designing equipment for harsh environments. (The limited range of 5G radios means that equipment is often deployed deep into the field.)
  • Ensuring solutions can scale effectively to meet the demands of diverse deployments.

“There is a pressing need for customized infrastructure to fully leverage the capabilities of #5G.” – Zeljko Loncaric, @congatecAG via @insightdottech

That’s why congatec developed a solution to provide TEMs with a faster path to market. The conga-HPC/sILH platform is designed to pre-integrate the most complex system elements. The solution includes a backhaul connection to the core network, two RF antenna modules, an Intel® Xeon® D processor, a secure Forward Error Correction (FEC) accelerator, and the full FlexRAN software stack.

According to Loncaric, the technology package is suitable for all types of 5G radio access network configurations. With conga-HPC/sILH, TEMs can focus on their core competencies and keep their specific IP in-house, delivering 5G OpenRAN servers with high levels of trust and design security.

The Role of COM-HPC in Building Robust 5G Infrastructure

The heart of the platform is the COM-HPC Server Size D module, which features an Intel Xeon D processor. This combination offers the performance, efficiency, and security features needed for 5G applications. Notably, selected modules support extreme temperature ranges from -40°C to 85°C, enabling OpenRAN servers to be deployed beyond the confines of air-conditioned server rooms.

The modules plug into Intel’s platform carrier board, which provides a robust and flexible foundation for developing 5G infrastructure. For instance, it supports a wide range of interfaces and acceleration technologies, helping TEMs to streamline the design process.

“The carrier board is a highly flexible reference platform that demonstrates the effectiveness of our offering and provides significant support for TEMs. Combined with our COM-HPC Server module, it enables rapid custom builds that require connections and interfaces not typically found in a RAN server,” says Loncaric.

Enabling Security and Flexibility in Private 5G Networks

To overcome the security concerns of 5G, the platform includes Intel® Software Guard Extensions, which enable secure channel setup and communication between 5G control functions. Built-in crypto acceleration reduces the performance impact of full data encryption and enhances the performance of encryption-intensive workloads.

For precise timing, the platform incorporates Synchronous Ethernet (SyncE) and a Digital Phase-Locked Loop (DPLL) oscillator. These technologies are crucial for synchronizing nodes with the 5G infrastructure.

Together, these technologies allow TEMs to significantly reduce their design effort and accelerate time-to-market. The modular nature of the solution also optimizes ROI and sustainability, as systems can be easily scaled and upgraded with a simple module swap. According to Loncaric, this approach can reduce upgrade costs by up to 50% compared to a full system replacement.

Looking Ahead: The Future of Private 5G Networks and OpenRAN

congatec attributes the success of its platform to its partnership with Intel.

“Telecommunications is a really hard market to access—up until around ten years ago, it was more or less impossible,” Loncaric explains. “By partnering with Intel and through initiatives like the O-RAN Alliance, we were able to enter it step by step. Since then, we’ve released several new standards—the latest, based on Intel Xeon D, is a good fit for several niche applications such as campus networks and industrial environments.”

Looking to the future, congatec plans to develop more solutions that will provide TEMs with even higher performance. Beyond that, the company intends to continue its focus on open standards and edge computing expertise.

“We believe our commitment to open standards and our extensive experience in edge computing and industrial applications positions us as a key player in 5G technology across multiple market segments. Through continuous innovation and collaboration with industry-leading partners like Intel, we aim to drive the development of next-generation communication networks, ensuring they continue to meet the evolving needs of modern applications,” says Loncaric.

As the 5G market continues to evolve, solutions like the conga-HPC/sILH COM-HPC platform will play a crucial role in enabling TEMs to meet the diverse and rapidly changing demands of 5G OpenRAN deployments. By providing a flexible, integrated, and powerful foundation, this platform empowers TEMs to innovate faster and deliver the next generation of 5G infrastructure.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Modernizing the Factory with the Industrial Edge

Often when you talk about digital transformation and Industry 4.0, the focus is technology. But people are the key to change.

As manufacturers adopt modern technologies, the challenges they face usually stem more from the mindset and collaboration of those implementing them than from the tools themselves, according to Kelly Switt, Senior Director and Global Head of Intelligent Edge Business Development at Red Hat, provider of enterprise open source software solutions.

Manufacturing Operations Rely on Team Relationships

Manufacturing operations rely so heavily on collaborative and adaptable teams and individuals because they involve complex processes that require domain expertise, coordination, troubleshooting, and optimization. Shifting from legacy systems to modern, interconnected platforms, for example, requires a corresponding change in mindset.

The technologies and tools implemented within the factory should empower collaboration and productivity by breaking down silos and removing friction between teams.

“Businesses are a formation of people, and how those people operate the business often emulates system design,” explains Switt. “If you have poor collaboration with your IT counterparts or still experience siloed friction in the relationship, it will manifest in your systems—whether it’s a lack of resiliency or the inability to stay on schedule.”

That’s why Red Hat and Intel collaborated on a modern approach to advancing manufacturing operations and teams. The industrial edge platform is a portfolio of enablement technologies, including Red Hat Device Edge, Ansible Automation Platform, and OpenShift. It also features Intel’s cutting-edge hardware and software stack, including Intel® Edge Controls for Industrial, allowing users to create a holistic solution that meets their specific needs.

“If you have poor collaboration with your #IT counterparts or still experience siloed friction in the relationship, it will manifest in your systems.” – Kelly Switt, @RedHat via @insightdottech

Bridging the Gap with Industrial Automation

A key component of the Red Hat industrial edge platform enables automation of previously manual tasks, one of the first steps toward overcoming cultural challenges. Software automation strategies for provisioning, configuring, and updating can also provide common ground for IT and OT teams to collaborate, and free them up for more critical tasks.

“By automating routine tasks, you can free up the capacity of your staff to focus on more critical aspects of modernization,” Switt explains.

The industrial edge platform helps automate tasks, including system development, deployment, management, and maintenance, not only at the server compute level but also at the device and networking level, allowing for more autonomous management of infrastructure.

“You can really create a platform-based strategy around how you think about having more autonomous management of the infrastructure that best supports the productivity of your facility,” says Switt.

Once automation is in place, the next step is modernizing the data centers within the factory. These centers tend to house larger, more critical applications that run the manufacturing processes. Modernizing these systems allows for greater agility and faster changes, which are crucial in today’s fast-paced manufacturing environment.

“Modern technology allows you to have applications with more agility, enabling more frequent updates and faster adaptation to changing needs,” Switt explains. “This not only improves productivity but also enhances the collaboration between IT and OT teams.”

The pharmaceutical industry, for example, needs a high level of supply chain traceability. Modern technology enables organizations to reduce the time needed to implement changes from six months to a year down to just 90 days. This acceleration brings significant value both to the management of the plant or factory and to the overall productivity and output of the facility.

In addition, the industrial edge platform delivers a real-time kernel that lowers latency and reduces jitter so applications can run repeatedly with greater reliability.

“Red Hat’s solutions allow you to not only have an autonomous platform but one that is stable, secure, and based on open source so manufacturers can get to an open, interoperable platform with less proprietary hardware,” says Switt.

Future of Manufacturing Enabled by the Industrial Edge

As manufacturers continue to navigate the complexities of Industry 4.0, collaborations like the one between Red Hat and Intel—focused on culture, people, and mindset—are crucial to the success of their efforts.

“Intel is a core collaborator of ours because not only is Intel ubiquitous with running both the public cloud as well as the IT data centers but is, and should continue to be, ubiquitous with running the factory data center or data room facilities,” Switt says.

By breaking down silos, embracing automation, and modernizing infrastructure, manufacturers can unlock the full potential of their operations and pave the way for a more agile, efficient, and innovative future.

“With Red Hat and Intel, we have the technology that enables you to run a better, faster, and more efficient factory. It’s up to manufacturers to decide what their future looks like, how they want to operate, and the level of collaboration and culture change they bring in to do so,” says Switt.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

The Journey to the Network Edge

The advantages of moving to the network edge are clear: greater speed, enhanced security, and improved user experience. But how does a business actually make that move? What capabilities will best fit the bill and how much should it cost? Is there some kind of Platonic ideal solution out there that a company should search for?

We explore the network edge with CK Chou, Product Manager for IT/OT hardware-solution provider CASwell. He talks about difficulties in transitioning to the network edge, the role of AI there, and how old-school technology can point the way to a valuable solution with just a little creative thinking (Video 1).

Video 1. CASwell’s CK Chou talks about the challenges of moving to the edge and the role of network edge devices on the “insight.tech Talk.” (Source: insight.tech) 

Why are businesses moving to the network edge these days?

If we are talking about edge computing, we all know that it is all about handling data right where it is created instead of sending everything to the central server. This means faster response and less internal traffic, so it is perfect for things that need instant reactions, like manufacturing, retail, transportation, financial services, et cetera.

Let me say it in this way: Imagine you are in a self-driving car and something unexpected happens on the road. You need your car to react instantly, because every millisecond counts; you cannot afford a delay waiting for data to travel to a distant server and back. It’s not like waiting for a loading sandbox when you’re using your computer, right? In self-driving scenarios, any delays could mean life or death. This is one example where edge computing comes in to handle data right at the source to make those split-second decisions.

And of course it’s not just about the speed; it’s also about keeping your information safe. If sensitive data like your financial information can be processed locally instead of being sent over the internet to the central server, there’s a lower chance of it being intercepted or hacked. The less your data travels around, the safer it stays.

By processing data on the spot, edge computing helps keep everything running smoothly, even in places where internet connections might be unreliable. In short, edge computing is all about speed, security, and reliability. It brings the power of data processing closer to where it’s needed most—whether it’s in your car or your doctor’s office or on the factory floor.

But moving to the network edge is not always easy. It’s a big step and comes with its own set of challenges. Companies face things like increased complexity in managing systems, higher infrastructure costs, limited processing power, data-management issues, and more. Despite these challenges, the benefits of edge computing are too significant to ignore. It can really boost infrastructure performance, improve security, and reduce overall costs, eventually making it worth the effort to overcome all those hurdles.

What capabilities of network-edge devices will help with business success?

It is a tricky question. If I’m talking about my dream edge device, it needs to be small and compact, and also packed with multiple connection options like SNA, Wi-Fi, and 5G for different applications. And it would be nice to have a rugged design that could operate in a harsh environment and handle a wide range of temperatures if users want to install the equipment in stone-cold mountains or hot deserts. It should also offer powerful processing but consume little power. And, of course, the most important thing is that the cost of this all-in-one box needs to be extremely low.

Getting all that in one device sounds perfect, right? But do you really think that would even be possible? The truth is, companies at the edge don’t really need an all-in-one box. What they really need is a device with the right features for their specific environment and application. And that’s what CASwell is all about.

We have a product line that can provide a variety of choices—from basic models to high-end solutions and from IT to OT applications. Whether it’s for a small office, a factory, or a remote location, we have got options designed for different conditions and requirements so companies can easily find the right edge device without paying for features they don’t really need.

What is the role of AI at the network edge?

Nowadays, AI-model training is done in the cloud, due to its need for massive amounts of data and high computational power. But think about how big an AI data center needs to be. Imagine something the size of a football field filled with dozens of big blocks, and each block is packed with hundreds of servers, all linked together and working nonstop on model training.

An AI server like that sounds amazing, but it is too far from our general use cases and not affordable for our customers. Remember: The concept of edge computing is all about handling data right where it is created instead of sending everything to a central server. So if we want to use AI to enhance our edge solutions, we cannot just move the entire AI factory to our server room—unless you are super rich and your server room is the size of a football field.

Instead, we keep the heavy-duty deep learning tasks in a centralized AI center and shift the inference part to the edge. This approach requires much less power and data, making it perfect for edge equipment. We’re already seeing this trend with AI integrated into our everyday devices like mobile phones and AI-enabled PCs. These devices use cloud-trained models to make smart decisions, provide personalized experiences, and enhance user interaction.
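
As a rough sketch of that split (not any specific product), the snippet below loads a model that was trained and exported in the cloud and runs only the lightweight inference step locally; the file name, input shape, and choice of ONNX Runtime are illustrative assumptions.

    import numpy as np
    import onnxruntime as ort

    # A model trained centrally and exported once to ONNX (name is hypothetical).
    session = ort.InferenceSession("cloud_trained_model.onnx",
                                   providers=["CPUExecutionProvider"])

    # A locally captured input, e.g., one camera frame, processed on the spot.
    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
    outputs = session.run(None, {session.get_inputs()[0].name: frame})
    print(outputs[0].shape)  # the decision happens at the edge, not in the cloud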

CASwell is right now building a new product line for edge-AI servers. It is designed to bring AI capabilities right from the data center to the edge, giving us the power of AI instantly. It puts AI directly in the hands of those who need it, right when they need it.

How does CASwell help businesses address their network edge challenges?

We saw a trend where edge environments were becoming more challenging than we initially expected. More end users were looking for solutions that could work in both IT and light OT environments. They wanted to install edge equipment not just in the office—with air conditioning and on clean, organized racks—but also in environments like warehouses, factory floors, or even just in cabinets without proper airflow. 

CASwell decided to develop an entry-level desktop product—the CAF-0121—built around the Intel Atom® processor, which offers a great balance of performance and power efficiency. The CAF-0121 can handle a wider temperature range, up to something like -20°C to 60°C, compared to the typical 0°C to 40°C. This small box can also provide 2.5-gig support to fulfill basic infrastructure connectivity needs. Plus, it is compact and fanless, with a passive-cooling design, which is suitable for edge computing applications.

Our goal with this new model was to provide OT-grade specs at an IT-friendly price. This means users could cut down on the resources needed to manage their infrastructure and make deployment much simpler. They could use the same equipment across both IT and OT applications, making it easier to standardize and maintain their technology setup. The approach for the CAF-0121 allows businesses to adapt to different environments without needing separate solutions for each scenario, so it is really an exciting product.

What were some of the challenges with creating CAF-0121?

The technology around the thermoelectric module—we call it TEM—is what we rely on for CAF-0121. TEM is already a proven solution for cooling overheating components; it is common in things like medical devices, car systems, refrigerators, water coolers, and other equipment that needs quick and accurate temperature control.

These devices work by creating a temperature difference when electric current passes through them, causing one side to heat up and the other side to cool down. The more current we send through, the bigger the temperature difference we get between the two sides.

People normally use only the cooling capability of the TEM, but we had a different idea: Why not leverage both the cooling and heating capabilities to help our edge devices operate in a wider temperature range? The overall concept is that by leveraging the heating capability of the TEM, we can indirectly extend the system’s operating temperature range downward. And, conversely, by using the cooling capability, it can cool down the system when the internal ambient temperature rises to a certain high level. When the room is getting cold, the TEM operates as a heater; when the room is getting hot, the TEM operates as a cooler.
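
A simplified sketch of that control loop is shown below. The thresholds, dead band, and sensor and driver functions are illustrative stand-ins, not CASwell’s actual firmware, which runs on a microcontroller.

    import random
    import time

    HEAT_BELOW_C = 0.0   # hypothetical set point: below this, drive the TEM as a heater
    COOL_ABOVE_C = 45.0  # hypothetical set point: above this, drive it as a cooler
    HYSTERESIS_C = 5.0   # dead band so the module doesn't flip modes constantly

    def read_internal_temp_c():
        # Stand-in for the chassis thermal sensor reading.
        return random.uniform(-25.0, 60.0)

    def drive_tem(mode):
        # Stand-in for setting the direction (and magnitude) of the TEM current.
        print("TEM mode ->", mode)

    mode = "off"
    for _ in range(10):  # real firmware would loop forever
        t = read_internal_temp_c()
        if t < HEAT_BELOW_C:
            mode = "heat"  # reversed current warms the chassis interior
        elif t > COOL_ABOVE_C:
            mode = "cool"  # forward current pumps heat out of the chassis
        elif HEAT_BELOW_C + HYSTERESIS_C <= t <= COOL_ABOVE_C - HYSTERESIS_C:
            mode = "off"   # comfortably back inside the safe band
        drive_tem(mode)
        time.sleep(0.1)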

With a TEM, we are no longer limited by the operating temperature range of our individual components; we can expand the temperature range of our equipment beyond what the components could typically allow, pushing the temperature boundaries while the device still maintains reliability.

And with this project we have gained some really valuable know-how, using an old-school technology as an innovative solution to bring added value to our products in this highly competitive market. We also want this small success to inspire our R&D team to stay creative and think outside the box, not just stick to the traditional way of doing things.

How does CASwell work with technology partners to make its product line possible?

A solid edge computing device should have just the right processing power, be energy efficient and packed in a compact size, with a variety of connection options, and of course have a competitive price. These are really the basic must-haves for any edge computing device.

That’s why we chose the Intel Atom processor for the CAF-0121 project. With the Atom we can provide the right level of performance and still keep power consumption low. And the Intel LAN controller helps us easily add support for 2.5-gig Ethernet to this box, ensuring compatibility with most infrastructure requirements.

The Atom also has built-in instructions that can accelerate IPsec traffic, making it an excellent choice for security-focused applications. Whether you are dealing with data encryption, secure communications, or other security jobs, this processor is up to the challenge.

To further enhance security, the Atom also integrates BIOS Guard and Boot Guard to provide a hardware root of trust. So we are not just talking about great performance and efficiency; we are delivering a high level of protection for the BIOS and the boot-up process. This level of security is crucial, especially for edge devices that need to handle sensitive information and critical tasks without compromising protection.

Among the various players in this market, only Intel offers a one-stop shop for all these features. Intel doesn’t just provide the hardware but also the driver and firmware support. This level of integration has made the development of the CAF-0121 project so much easier, and it has really shortened our time to market. When you have got the processing power, security features, and even software support all coming from one reliable partner, it certainly streamlines the whole process. It doesn’t just simplify the engineering and development work but also ensures that everything works seamlessly together.

Then the hardware designer—like CASwell—can focus more on optimizing performance and less on troubleshooting compatibility issues. This is a big win both for us and for our customers, allowing us to deliver high-quality, reliable edge computing solutions faster and more efficiently.

In the end, our goal is very simple: We aim to set a new standard for edge computing equipment and provide flexible edge solutions that help customers tackle challenges from the cloud, through the network, and all the way to the intelligent edge.

Related Content

To learn more about the network edge, listen to The Network Edge Advantage: Achieving Business Success and read AI Everywhere—From the Network Edge to the Cloud. For the latest innovations from CASwell, follow them on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Reverse Proxy Server Advances AI Cybersecurity

AI models rely on constant streams of data to learn and make inferences. That’s what makes them valuable. It’s also what makes them vulnerable. Because AI models are built on data they are exposed to, they are also susceptible to data that has been corrupted, manipulated, or compromised.

Cyberthreats can come from bad actors that fabricate inferences and inject bias into models to disrupt their performance or operation. The same outcome can be produced by Distributed Denial of Service (DDoS) attacks that overwhelm the platforms that models run on (as well as the model itself). These and other threats can subject models and their sensitive data to IP theft, especially if the surrounding infrastructure is not properly secured.

Unfortunately, the rush to implement AI models has resulted in significant security gaps in AI deployment architectures. As companies integrate AI with more business systems and processes, chief information security officers (CISOs) must work to close these gaps and prevent valuable data and IP from being extracted with every inference.

AI Cybersecurity Dilemma for Performance-Seeking CISOs

On a technical level, there is a simple explanation for the lack of security in current-generation AI deployments: performance.

AI model computation is a resource-intensive task and, until very recently, was almost exclusively the domain of compute clusters and supercomputers. That’s no longer the case, with platforms like the octa-core 4th Gen Intel® Xeon® Scalable Processors that power rack servers like the Dell Technologies PowerEdge R760, which is more than capable of efficiently hosting multiple AI model servers simultaneously (Figure 1).

Picture of Dell rack server
Figure 1. Rack servers like the Dell PowerEdge R760 can host multiple high-performance Intel® OpenVINO toolkit model servers simultaneously. (Source: Dell Technologies)

But whether hosted at the edge or in a data center, AI model servers require most if not all of a platform’s resources. This comes at the expense of functions like security, which is also computationally demanding, almost regardless of the deployment paradigm:

  • Deployment Model 1—Host Processor: Deploying both AI model servers and security like firewalls or encryption/decryption on the same processor pits the workloads in a competition for CPU resources, network bandwidth, and memory. This slows response times, increases latency, and degrades performance.
  • Deployment Model 2—Separate Virtual Machines (VMs): Hosting AI models and security in different VMs on the same host processor can introduce unnecessary overhead, architectural complexity, and ultimately impact system scalability and agility.
  • Deployment Model 3—Same VM: With both workload types hosted in the same VM, model servers and security functions can be exposed to the same vulnerabilities. This can exacerbate data breaches, unauthorized access, and service disruptions.

CISOs need new deployment architectures that provide both the performance scalability that AI models need and the ability to protect the sensitive data and IP residing within them.

Proxy for AI Model Security on COTS Hardware

An alternative would be to host AI model servers and security workloads on different systems altogether. This provides sufficient resources to avoid unwanted latency or performance degradation in AI tasks while also offering physical separation between inferences, security operations, and the AI models themselves.

The challenge then becomes physical footprint and cost.

“Building on a Dell PowerEdge R760 Rack Server featuring a 4th Gen Intel Xeon Scalable Processor, F5 integrated an Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100.” – @F5 via @insightdottech

Recognizing the opportunity, F5 Networks, Inc., a global leader in application delivery infrastructure, partnered with Intel and Dell, a leading global OEM company that provides an extensive product portfolio, to develop a solution that addresses the requirements above in a single, commercial-off-the-shelf (COTS) system. Building on a Dell PowerEdge R760 Rack Server featuring a 4th Gen Intel Xeon Scalable Processor, F5 integrated an Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100 (Figure 2).

Image of Intel IPU adapter
Figure 2. The Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100 offloads security operations from a host processor, freeing resources for other workloads like AI training and inferencing. (Source: Intel)

The Intel IPU Adapter E2100 is an infrastructure acceleration card that delivers 200 GbE bandwidth, x16 PCIe 4.0 lanes, and built-in cryptographic accelerators that combine with an advanced packet processing pipeline to deliver line-rate security. The card’s standard interfaces allow native integration with servers like the PowerEdge R760, and the IPU provides ample compute and memory to host a reverse proxy server like F5’s NGINX Plus.

NGINX Plus, built on an open-source web server, can be deployed as a reverse proxy server to intercept and decrypt/encrypt traffic going to and from a destination server. This separation helps mitigate DDoS attacks but also means cryptographic operations can take place somewhere other than the AI model server host.

The F5 Networks NGINX Plus reverse proxy server provides SSL/TLS encryption as well as a security air gap between unauthenticated inferences and Intel® OpenVINO toolkit model servers running on the R760. In addition to operating as a reverse proxy server, NGINX Plus provides enterprise-grade features such as security controls, load balancing, content caching, application monitoring and management, and more.
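
To make the pattern concrete, here is a minimal Python sketch of the reverse-proxy idea. In the production design this role is played by NGINX Plus running on the IPU; the backend address, port, and certificate paths below are illustrative assumptions. TLS terminates at the proxy, so only plaintext HTTP on a private link ever reaches the model server.

    import ssl
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
    from urllib.request import Request, urlopen

    MODEL_SERVER = "http://10.0.0.5:9000"  # model server on a private network (assumed)

    class ReverseProxy(BaseHTTPRequestHandler):
        def do_POST(self):
            # Read the already-decrypted inference request from the client.
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            # Forward it over the private link to the backend model server.
            upstream = Request(MODEL_SERVER + self.path, data=body, method="POST")
            with urlopen(upstream) as resp:
                payload = resp.read()
                status = resp.status
            self.send_response(status)
            self.send_header("Content-Length", str(len(payload)))
            self.end_headers()
            self.wfile.write(payload)

    if __name__ == "__main__":
        server = ThreadingHTTPServer(("0.0.0.0", 8443), ReverseProxy)
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        ctx.load_cert_chain("proxy.crt", "proxy.key")  # clients only ever see TLS here
        server.socket = ctx.wrap_socket(server.socket, server_side=True)
        server.serve_forever()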

Streamline AI Model Security. Focus on AI Value.

For all the enthusiasm around AI, there hasn’t been much thought given to potential deployment drawbacks. Any company looking to gain a competitive edge must rapidly integrate and deploy AI solutions in its tech stack. But to avoid buyer’s remorse, it must also be aware of security risks that come with AI adoption.

Running security services on a dedicated IPU not only streamlines deployment of secure AI but also enhances DevSecOps pipelines by creating a distinct separation between AI and security development teams.

Maybe we won’t spend too much time worrying about AI security after all.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

The Network Edge Advantage: Achieving Business Success

In today’s rapidly evolving technology landscape, businesses increasingly turn to network edge solutions to meet the demands of real-time data processing, enhanced security, and improved user experiences. But deploying these solutions comes with its own set of challenges, including latency issues, bandwidth constraints, and the need for robust infrastructure.

This podcast episode explores the world of network edge computing, and the unique challenges businesses face when deploying these advanced solutions. We discuss the critical features of network edge devices and how AI can help drive efficiency. Additionally, we examine the specific challenges and demands industries encounter and how they can overcome them.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guest: CASwell

Our guest this episode is CK Chou, Product Manager at CASwell, a leading hardware manufacturer for IoT, network, and security apps. CK joined CASwell in 2014 and has since worked to build strong customer relationships by ensuring that CASwell’s solutions meet specific needs and standards.

Podcast Topics

CK answers our questions about:

  • 2:42 – The move to the network edge
  • 6:17 – Network edge devices built for success
  • 11:15 – Moving to AI at the network edge
  • 14:37 – Addressing network edge challenges
  • 17:30 – Overcoming the increased demand
  • 22:37 – Implementing network edge devices
  • 25:32 – Partnering on performance and power

Related Content

To learn more about the network edge, read AI Everywhere—From the Network Edge to the Cloud. For the latest innovations from CASwell, follow them on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” where we explore the latest IoT, AI, edge, and network technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re taking on the conversation of the network edge with CK from CASwell. But before we get started, let’s get to know our guest. CK, what can you tell us about yourself and what you do at CASwell?

CK Chou: Hi, Christina; hi, everyone. My name is CK, and I have over 10 years of experience in product management at CASwell. My main focus has been on serving customers in Europe and the Middle East. Over the years my mission has been to build strong relationships with clients across these regions, ensuring that the solutions from CASwell meet their specific needs and standards.

And about CASwell: It originally began as a division dedicated to network-security applications. Over time our expertise and focus grew, leading us to branch out and establish ourselves as a standalone company in 2007. Over the years CASwell has placed a strong emphasis on R&D to stay at the forefront of technology and innovation. However, we were not satisfied with being only a networking player, so we expanded our business into information technology and operational technology applications. I should say that our journey from a small division to an independent company wasn’t just about getting bigger; it was about getting better at what we do.

Nowadays, CASwell is a leading hardware-solution provider for the IT and OT industries in Taiwan, specializing in the design, engineering, and manufacturing of not only networking appliances but also industrial equipment, edge computing devices, and advanced edge-AI solutions that can meet the demands of modern applications.

Christina Cardoza: Great, and I’m looking forward to digging into some of that hardware. But before we jump into that, I want to start the conversation trying to understand a little bit more of why companies are moving to the network edge. I like how you said in your introduction: you’re trying to stay at the forefront of technology and innovation and get better at what you do. And I think a lot of businesses are trying to do the same, and they look to CASwell to help them along that journey. But why are they moving to the network edge today, and what challenges are they facing on their journey?

CK Chou: If we are talking about edge computing, we all know that it is all about handling data right where it is created instead of sending everything to a central server. This means faster response and less internal traffic, making it perfect for things that need instant reactions, like manufacturing, retail, transportation, financial services, et cetera.

Let me say it in this way. Imagine you are in a self-driving car and something unexpected happens on the road. You need your car to react instantly because every millisecond counts, okay? You cannot afford a delay waiting for data to travel to a distant server and back. It’s not like waiting for a loading sandbox when you’re using your computer, right? In self-driving scenarios any delays could mean life or death. This is just an example where edge computing comes in, handling data right at the source to make those split-second decisions.

And of course it’s not just about the speed; it’s also about keeping your information safe. If sensitive data like your financial information can be processed locally instead of being sent over the internet to the central server, there’s a lower chance of it being intercepted or hacked. The less your data travels around, the safer it stays.

This kind of localized processing is also super important in other areas, like health care, which needs instant diagnostic results, or machines in a factory detecting problems. By processing data on the spot, edge computing helps keep everything running smoothly, even in places where internet connections might be unreliable. So, in short, edge computing is all about speed, security, and reliability. It brings the power of data processing closer to where it’s needed most—whether it’s in your car or your doctor’s office or on the factory floor.

But from what I hear from some of our customers, moving to the network edge is not always easy. It’s a big step and comes with its own set of challenges. Companies face things like increased complexity in managing systems, higher infrastructure costs, limited processing power, data-management issues, and more. Despite these challenges, the benefits of edge computing are too significant to ignore. It can really boost infrastructure performance, improve security, and reduce overall costs, eventually making it worth the effort to overcome all those hurdles.

Christina Cardoza: Yeah, absolutely. I can definitely see the need for network edge and edge computing with all the demands of the real-time data processing, like you mentioned—the enhanced security, improving user experiences.

But I feel like a lot of times when we discuss the edge it feels very abstract. We know all of the benefits and why we should be moving there, but how do we move there? Is there a network-edge device, for instance, that is able to help us move to the edge and get all of these benefits? What does that look like?

CK Chou: The challenges that I mentioned earlier make moving to the edge seem expensive and complicated. But if companies can integrate reliable edge devices, with innovative, dependable, and affordable hardware features, they can overcome these challenges and allocate their limited resources to building and managing their infrastructure, maintaining their data, improving security, or training their staff.

That’s why companies need to work closely with an edge-device provider like CASwell. Our customers can always count on us because we design the right equipment for the right use case, and we ensure the edge devices are the key to their edge journey, making their transition to the edge smoother and easier. So, at the end of the day, having the right device with the right features is essential, but only together with the right partner, like CASwell. We support companies from the hardware perspective, allowing them to focus more on their specialization. Each party plays its own role, enabling companies to truly do more on their edge journey.

Christina Cardoza: I know you mentioned obviously it’s important to have the right features and reliable, affordable hardware, and that helps you build and manage infrastructure and maintain that data that’s really important. But can you talk a little bit more about what those features and hardware capabilities look like? When companies are looking for a network-edge device, what type of capabilities are really going to bring them success?

CK Chou: Okay, it is a tricky question for me. If I’m talking about my dream edge device, it needs to be small and compact, and also packed with multiple connection options like SNA, Wi-Fi, and 5G for different applications. And it would also be nice to have a rugged design that can operate in a harsh environment and handle a wide range of temperatures if users want to install the equipment in stone-cold mountains or hot deserts. It should also offer powerful processing but consume low power. And, of course, the most important thing is that the cost of this all-in-one box needs to be extremely low.

Getting all that in one device sounds perfect, right? But do you really think that would even be possible? Okay, I can tell you the truth: Companies at the edge don’t really need an all-in-one box. What they really need is a device with the right features for their specific environment and application, and that’s what CASwell is all about.

We have a product line which can provide a variety of choices, from the basic models to high-end solutions and from IT to OT applications. Whether it’s for a small office, a factory, or a remote location, we have got options designed for different conditions and requirements. So, with the right partner, companies can easily find the right edge device without paying for features they don’t really need.

Moving to edge computing certainly costs a lot, so we need to do it smartly and efficiently. The idea is to ensure that every edge player can get exactly what they need to optimize their operations and stay ahead of the game. So, sorry that there’s no single answer to your question here. In my opinion, if an edge device can offer the right features and the right capabilities at an affordable cost for the specific use case, then it’s exactly the kind of good edge device we are looking for.

Christina Cardoza: Yeah, absolutely. I love that businesses don’t necessarily need an all-in-one box. I think so many times businesses are focused on finding something cost effective that tries to meet all their needs, and they sort of lose sight of what their needs actually are, how a device can help them, and the benefits in the long run. So, that’s definitely great, and I want to dig a little deeper into how partnerships work with CASwell, as well as the different product lines that you have.

But before we get there I’m a little curious, because obviously when we talk about edge today, AI is so closely related to it. AI at the edge is a term that’s going around these days, and so I’m curious what the role here is at the network edge, especially when we’re talking about network-edge devices.

CK Chou: We know that nowadays AI-model training is done in the cloud due to its need for massive amounts of data and high computational power. If you do a quick search online, you’ll find lots of pictures showing what an AI factory or AI data center needs to look like. Imagine something the size of a football field filled with dozens of big blocks, and each block is packed with hundreds of servers, all linked together and working nonstop on model training.

I agree that such an AI server sounds amazing, but it is too far from our general use cases and not affordable for our customers. As we talked about earlier, the concept of edge computing is all about handling data right where it is created instead of sending everything to a central server. So, if we want to use AI to enhance our edge solutions, we cannot just move the entire AI factory to our server room, unless you are super rich and your server room is the size of a football field.

Instead, we keep the heavy-duty, deep learning tasks in a centralized AI center and shift the inference part to the edge. This approach requires much less power and data, making it perfect for edge equipment. We’re already seeing this trend with AI integrated into our everyday devices, like mobile phones and AI-enabled PCs. These devices use cloud-trained models to make smart decisions, provide personalized experiences, and enhance user interaction.

Building on this trend, edge-AI servers are coming into the picture at CASwell, integrating general compute capability with a GPU engine. This edge server can handle basic AI calculations on top of our existing hardware. This means faster decision-making and the ability to use AI-driven insights in real time, whether it’s for cybersecurity, small factories, or other edge applications.

CASwell is now building a new product line for edge-AI servers designed to bring AI capabilities right from the data center to the edge, giving us the power of AI instantly, and it puts AI directly in the hands of those who need it and right when they need it.

Christina Cardoza: So, tell me a little bit more about that product line or the other products that CASwell offers. You mentioned that you have a whole suite of tools to help businesses depending on what their needs are, their demands, and what they’re trying to get. So, how is CASwell helping these businesses address their network-edge challenges and demands?

CK Chou: I can introduce a model, the CAF-0121. It is an interesting entry-level desktop product from CASwell, built around Intel’s new-generation Atom® processor, which offers a great balance of performance and power efficiency. This small box can also provide 2.5-gig support to fulfill basic infrastructure connectivity needs, plus it has a compact, fanless, passive-cooling design that is suitable for edge computing applications.

But we can see a trend where edge environments are becoming more challenging than we initially expected. End users want to install edge equipment not just in office space with air conditioning or on clean, organized racks, but also in OT environments like warehouses, factory floors, and even cabinets without proper airflow. The line between IT and OT is becoming more blurred, and more users are looking for solutions that can work in both IT and light OT environments.

To meet that need, CASwell decided to develop the CAF-0121 to handle a wider temperature range, from the typical 0°C–40°C up to something like -20°C–60°C. Our goal with this new model is to provide OT-grade specs at an IT-friendly price. This means users can cut down on the resources needed to manage their infrastructure and make deployment much simpler. They can use the same equipment across both IT and OT applications, making it easier to standardize and maintain their technology setup. So the approach for the CAF-0121 allows businesses to adapt to different environments without needing separate solutions for each scenario, which makes it a really exciting product.

Christina Cardoza: Yeah, that’s great that you developed the CAF-0121 to help businesses in all of their needs. It occurs to me as we’re talking about this, the different temperature ranges that they need to meet, the cost ranges, that not only are businesses having challenges, but sometimes it can be challenging for partners like CASwell to create these solutions that meet their demand.

So, I’m just curious if there’s any insight you can provide from developing this product: Did you have any challenges meeting all of these demands, and how were you able to overcome them?

CK Chou: The technology around the thermoelectric module—we call it TEM—is the one we are relying on for CAF-0121. TEM is already a proven solution for cooling overheating components. It is common in things like medical devices, car systems, refrigerators, water coolers, and other equipment that needs quick and accurate temperature control.

These slim devices work by creating a temperature difference when electric current passes through them, causing one side to heat up and the other side to cool down. The more current we send through, the bigger the temperature difference we get between the two sides. And of course the TEM does not run on its own. It is controlled by a microcontroller and a thermal sensor that monitors the temperature inside the device. The firmware that we have programmed into the microcontroller takes those temperature readings and decides when to turn the TEM on and how much current to send through.

We have gone through countless trials and adjustments with the firmware settings to ensure our equipment stays in the ideal temperature range. We also had to watch out for condensation, because if a TEM cools down too quickly, it can cause moisture to form on the module surface. And if that moisture gets onto the circuit board, it could cause serious damage. So an appropriate solution for isolating moisture from the circuit board is also necessary.

While people normally use the cooling capability of the TEM, we had a different idea: Why not leverage both the cooling and heating capabilities to help our edge devices operate in a wider temperature range? So the overall concept is that by leveraging the heating capability of the TEM, we can indirectly extend the system’s operating temperature range downward. And, conversely, by using the cooling capability, it can cool down the system when the internal ambient temperature rises to a certain high level.

Let me say it in a simple way: When the room is getting cold, the TEM operates as a heater; and when the room is getting hot, the TEM operates as a cooler. With a TEM, we are no longer limited to the operating temperature range of the individual components we have selected. It helps us bridge the gap and allows us to expand the temperature range of our equipment beyond what the components could typically allow. This means we can push the temperature boundaries by using the TEM, and the device can still maintain reliability.

And some people might think: Why don’t we just use industrial-grade components that support a wider temperature range and make our lives easier? The reality is that those wide-temp components can sometimes cost twice as much as standard commercial ones, and the typical chassis designed for this case is usually large and heavy. And then of course the most important reason is that if we build our equipment just like everyone else, why would customers choose us over the competition? If that were the case, the CAF-0121 would just end up being another costly device with bulky thermal fans designed to support wide temperature ranges, and this is not what we want.

That’s why we have put a lot of effort into studying the characteristics of the TEM more closely: focusing on selecting the right thermal-conductivity materials, fine-tuning our firmware settings, and testing our device in temperature-controlled chambers day and night. Our goal is to redefine what edge computing hardware can be by offering solutions that are adaptable to various temperature environments, compact and lightweight, and still competitively priced.

Christina Cardoza: Yeah, it’s amazing to hear about those wide-ranging temperature environments you were mentioning, in cars and refrigerators, so I can see the importance of making sure it’s consistently reliable and provides that performance.

So, do you have any customers that have actually been using CAF-0121 and anything you can share with how they’re using it or in what type of solutions it is in?

CK Chou: This box is going into mass production in October this year, which is next month, and we have already got a few thousand purchase orders from a major European customer focused on cybersecurity applications, which is planning to use this device in small offices, warehouses, and possibly outdoor cabinets for electric-vehicle charging stations that need wider temperature support. This really highlights the advantage of the CAF-0121: The customer can use it across both IT and OT applications without needing separate solutions for different operating temperature conditions, which of course saves them from having to spend extra money.

We also sent samples to around seven or eight potential customers across various industries, including cybersecurity, SD-WAN, manufacturing, and telecom companies for instant traffic management. The feedback has been fantastic. Everyone loves the competitive price, which makes our device a great deal. And the compact size is another big win, because it can fit into tight spaces, which helps lower our shipping costs and also reduces the carbon footprint.

You know, in today’s market, pricing is a huge factor. We need to offer cost-effective solutions but cannot compromise on performance and flexibility. So it’s clear that our approach is hitting the mark for customers who need reliable and scalable edge solutions that don’t break the bank. The excitement we are seeing from these industries really proves that we are on the right track, and the CAF-0121 is exactly the kind of solution that can meet their needs.

Christina Cardoza: I can definitely see why the solution needs to be smart and compact, but then also fast, reliable, and high performance. So, I’m curious how you actually make that happen. And I should mention that “insight.tech Talk” and insight.tech as a whole are sponsored by Intel, but I know Intel has a lot of processors that make these devices possible and allow them to run fast in these different environments and in these small form factors. So, I want to hear a little bit more about how you work with technology partners like Intel in making your product line possible.

CK Chou: As we discussed earlier, a solid edge computing device should offer just the right processing power packed into a compact size, a variety of connection options, energy efficiency, and of course a competitive price. These are really the basic must-haves for any edge computing device.

That’s why we chose the Intel Atom processor for this project. With the Atom we can provide the right level of performance while keeping power consumption low. And thanks to an Intel LAN controller, we could easily add 2.5 Gigabit Ethernet support to this box to ensure compatibility with most infrastructure requirements and more. The Atom also has built-in instructions that accelerate IPsec traffic, making it an excellent choice for security-focused applications. So whether you are dealing with data encryption, secure communications, or other security jobs, this processor is up to the challenge.
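[Editor’s note: The built-in instructions CK refers to are presumably Intel’s crypto extensions such as AES-NI, SHA-NI, and PCLMULQDQ, which IPsec stacks use to accelerate encryption and authentication; the transcript doesn’t name them explicitly. A minimal sketch for confirming that a given part exposes them, assuming a Linux host:]

```python
# Check whether the CPU advertises the crypto instructions that IPsec
# stacks lean on (Linux-only sketch; flag names follow /proc/cpuinfo).
def cpu_flags() -> set[str]:
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("aes", "sha_ni", "pclmulqdq"):
    state = "present" if feature in flags else "absent"
    print(f"{feature}: {state}")
```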

And if we want to further enhance security, the Atom also integrates BIOS Guard and Boot Guard to provide a hardware root of trust. With these two guards we are not just talking about great performance and efficiency; we are delivering a high level of protection for the BIOS and the boot-up process. This level of security is crucial, especially for edge devices that need to handle sensitive information and critical tasks without compromising protection.
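[Editor’s note: Boot Guard isn’t something application code configures, but its reported state can be inspected. The sketch below reads the raw value of the model-specific register commonly documented as BOOT_GUARD_SACM_INFO; the MSR address and any bit interpretation are assumptions to verify against Intel documentation, and the read requires root plus the Linux msr kernel module.]

```python
# Hedged sketch: peek at the raw value of the MSR commonly documented
# as BOOT_GUARD_SACM_INFO. The address (0x13A) is an assumption to
# verify against Intel docs; needs root and `modprobe msr` on Linux.
import struct

MSR_BOOT_GUARD_SACM_INFO = 0x13A  # assumed address

def read_msr(index: int, cpu: int = 0) -> int:
    # The msr device exposes each register at a file offset equal
    # to its index; a read returns the 64-bit value.
    with open(f"/dev/cpu/{cpu}/msr", "rb") as f:
        f.seek(index)
        return struct.unpack("<Q", f.read(8))[0]

value = read_msr(MSR_BOOT_GUARD_SACM_INFO)
print(f"BOOT_GUARD_SACM_INFO = {value:#018x}")
```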

I can say that only Intel offers a one-stop shop for all these features among the various players in this market. They don’t just provide the hardware, but also the driver and firmware support. This level of integration made the development of the CAF-0121 project so much easier, and it really shortened our time-to-market. When you have the processing power, security features, and even software support all coming from one reliable partner, Intel, it certainly streamlines the whole process. This not only simplifies the engineering and development work but also ensures everything works seamlessly together.

So, with Intel’s comprehensive support, a hardware designer like CASwell can focus more on optimizing performance and less on troubleshooting compatibility issues. This is a big win for both us and our customers, allowing us to deliver high-quality, reliable edge computing solutions faster and more efficiently.

Christina Cardoza: Absolutely; that’s great to hear. We kept talking in this conversation about making things more cost effective, more affordable, so I’m sure being able to leverage the technology expertise, the processor, and other elements from a partner like Intel helps you focus on your sweet spot and not have to build things from scratch or make things more expensive than they need to be. So, great to hear how you’re using all of that different technology.

It’s been a great conversation. You’ve really been able to take a technical topic and make it more digestible and understandable. Unfortunately, we are running out of time, but before we go I just want to throw it back to you one last time, if you have any final thoughts or key takeaways you want to leave our listeners with today.

CK Chou: I started working at CASwell 10 years ago, and things were pretty different back then. At that time most of the processing power was centralized. Companies were all about making their servers super powerful, giving them fast internet connections to gather all the data from the edge. Servers were packed with multiple features to handle every use case you could imagine.

Times have changed. It’s all about instant processing and real-time AI calculations. Businesses need to make quick decisions right at the source of the data instead of sending everything back to the central server. That’s why edge computing has become such a big deal. It lets companies process data on the spot without any delay.

But when all the network players are shifting toward edge solutions, the real challenge is: how do we make our equipment different and better than everyone else’s? With this project, CAF-0121, we have gained some really valuable know-how by using an old-school technology as an innovative thermal solution for edge equipment, and we have tried to bring added value to our products in this highly competitive market. We also want this small success to inspire our R&D team to stay creative and think outside the box, and not just stick to the traditional way of doing things.

Also, thanks to Intel’s support around their edge solutions, including edge-optimized processors with built-in deep learning–inference capabilities, various LAN options for different connectivity needs, and of course all the documentation for integration along with driver and firmware support. This collaboration has really helped us push our designs to the next level.
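[Editor’s note: On Intel edge processors, deep learning inference is typically exercised through Intel’s OpenVINO toolkit. The sketch below shows the general shape of running a model on the CPU using the OpenVINO 2023+ Python API; the model file and input dimensions are placeholders, and this is an illustration rather than CASwell’s actual software stack.]

```python
# Minimal OpenVINO inference sketch (illustrative; model path and
# input shape are hypothetical, not a CASwell deliverable).
import numpy as np
import openvino as ov

core = ov.Core()
# Load a pre-converted model in OpenVINO IR format (path is a placeholder).
model = core.read_model("model.xml")
# Compile for the CPU; on Atom-class parts inference runs on the same
# cores that handle networking, so no extra accelerator is needed.
compiled = core.compile_model(model, "CPU")

# Dummy input matching an assumed 1x3x224x224 image tensor.
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([frame])[compiled.output(0)]
print("Top class:", int(result.argmax()))
```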

Finally, our goal is very simple: to set a new standard for edge computing equipment and provide flexible edge solutions that help customers tackle challenges from the cloud, through the network, and all the way to the intelligent edge.

Christina Cardoza: Well, I can’t wait to see what else CASwell does in this space, as well as the different market solutions companies are going to build on CAF-0121 when it arrives. I invite all of our listeners to visit the CASwell website and contact them to see how they can help you with all of your edge and network-edge needs, as well as visit insight.tech, where we continue to cover partners like CASwell and how they’re innovating in this space.

So, I want to thank you again for joining us today, CK, as well as our listeners for tuning in. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.