AI Partnerships Drive Developer Innovation

Are you ready to take your AI career to the next level? We dive deep into the world of strategic partnerships, uncovering everything from finding the perfect match to harnessing the power of developer communities. Get ready for insider tips that will help you build the future of AI.

We explore the game-changing potential of AI partnerships—how can businesses and developers come together to create groundbreaking solutions? What’s the secret sauce to a successful collaboration? We also dive into the crucial role that developer communities and events play in driving innovation and building connections.

Listen Here


Apple Podcasts      Spotify      Amazon Music

Our Guests: Intel and Voxel51

Our guests this episode are Jason Corso, Cofounder and Chief Science Officer at Voxel51, a computer vision and visual AI solution provider; and Paula Ramos, AI Evangelist at Intel. Jason cofounded Voxel in 2016 with a mission to provide developers with open-source software frameworks. The company also offers an enterprise version of its framework to enable multiple users to securely collaborate. Paula joined Intel in 2021 and has worked to build and foster developer communities around Intel AI software.

Podcast Topics

Jason and Paula answer our questions about:

    • 2:11 – The evolving artificial intelligence landscape
    • 6:19 – How developers can keep up with the changes
    • 9:31 – Gaining developer support from large companies
    • 13:53 – Being part of developer communities and events
    • 17:21 – Staying on top of upcoming AI trends
    • 19:37 – Fostering community engagement

Related Content

For the latest innovations from Voxel, follow them on X/Twitter at @voxel51, LinkedIn, and GitHub. For the latest innovations from Intel, follow them on X/Twitter at @intel, LinkedIn, and GitHub.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” where we explore the latest IoT, edge, AI, and network technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re going to be talking about AI partnerships that spark developer engagement and innovations.

Who better to discuss this with than two companies embedded in the AI and developer communities? Today we’ll be speaking to Paula Ramos from Intel as well as Jason Corso from Voxel51. But as always, before we get started, let’s get to know our guests. Paula, a good friend of the show: for those who haven’t heard your previous conversations, what can you tell us about yourself and what you’re doing at Intel?

Paula Ramos: Yes, for sure. Thank you, Christina, for having me here; I’m so excited. I’m Paula Ramos. I have a PhD in computer vision and machine learning, and I’m working at Intel as an AI Evangelist, working with multiple products and multiple developers around the globe.

Christina Cardoza: Great. And Jason Corso from Voxel51, first-time guest of the podcast. What can you tell us about yourself and Voxel?

Jason Corso: Likewise. Nice to meet you all. Thanks for the invitation, Christina. So, Jason Corso. Yeah, I have a PhD in computer science. I’m a Co-Founder at Voxel. At Voxel we make a software refinery to help you work with your data, your models, various needs, and kind of refine them into production visual AI.

I’m also on the faculty of robotics and EECS at the University of Michigan, where I’ve done research for the last 10 or 15 years in computer vision and machine learning, all at the boundaries between the physical world and what we can do with computational systems these days.

Christina Cardoza: Awesome. So you’ve been in this space for a long time now and have probably seen it evolve even—it feels like every day something new is happening, and it’s evolving even further. So that’s where I wanted to start off the conversation with you, Jason. If you could just talk about what you’re seeing in this space, how it has changed over the last few years, where we are today, and what are the trends shaping where we are.

Jason Corso: Yeah. Indeed, it has changed quite a bit in the last, even the last few years, also the last 20 years. Like when I was doing my PhD, we were looking at things about how you can use computer vision to understand gestures and so on to interact with the computer, and look where we are now, right? 20 years later it’s been quite a wild ride.

So, last few years, let’s see. I think there are probably two major developments in the last couple of years that are really driving the way we all think about AI. The first one is probably pretty obvious, right? The availability of these large language models that handle huge token lengths and actually embed natural human language, giving us a resource we can interact with rather naturally.

Now, I mean, there are an awful lot of questions around what their limitations and capabilities are, but at the same time I think it’d be easy to find lots of different applications, right? At the beginning of this year I wrote a quick note on LinkedIn about how I thought LLMs would evolve in 2024. One key element I predicted was that we would see a true revolution in how we think about search: information search, information gathering, and so on. And I think we really are beginning to see that.

On the other hand, though, I’d probably point to an appreciation for the role that data has begun to play, or has been playing, in the development of various AI/ML models. When you go to grad school, you take your machine learning course and start training models to recognize digits and so on. You just quickly download a data set, either from some repository or from your professor, and most of the focus is on the algorithm.

And so we’ve built this culture of the model is king. But if you really think about what’s happening, even various leaders in the LLM space—to bring back to the first one—have begun to talk about the critical role that data, good data, high-quality data plays in this marriage of model, code, and data to build the AI systems that we’re using.

So I don’t know exactly where that appreciation is going to lead us. At my company, for example, we focus heavily on the role that data plays and providing developer tools for engaging with data alongside their models, rather than just expecting you to gen up some scripts to visualize your data or whatever, right?

But I think it’s an interesting time for me, because things have changed so much: 20 years ago my data sets were dozens of samples, hundreds of samples, right? Now we have data sets with tens of millions of samples. So actually managing them and understanding the failure modes and the distribution and so on is very difficult and requires, I think, new thinking.

Christina Cardoza: Yeah, absolutely. And you mentioned the search and information gathering. I’m definitely seeing on the consumer side AI being more prominent in these areas. When I search on Google or anything now, instead of just getting a list of links, an answer from Gemini comes up.

So it’s interesting to see how AI is evolving, but I’m glad you brought up LLMs, the repositories, and algorithms, and this data, and these models, because it’s really the developers that are pushing these advancements forward. A lot of times on “insight.tech,” we’re writing about advancements in manufacturing and retail and education, how businesses are using AI to transform their spaces; but what’s behind these transformations are really developers that are building these solutions that are working with LLMs.

So, Paula, I’m curious from your take, because you work with a lot of developers, you talk to a lot of developers in this space, what has their role been in keeping up with AI? And how can they even continue to compete in this space with all of the advancements and skill sets happening?

Paula Ramos: Yeah, that is a great question. I think all developers are looking for their path every day because things are changing so fast. But the main thing we need to keep in mind as developers, the challenge we have, is that we need to drive innovation in a huge field: artificial intelligence. So we need to be creative, we need to build intelligent applications, and we need to solve problems.

Maybe we have the same problems we had 20 years ago, as Jason was mentioning, but we have better tools now and a better way to approach those solutions; we just need to be creative with them. We have a lot of tools, but we still need to think about the end user of the applications.

So I think there are some areas where we still have room to improve: model development, data management, and how we can deploy those models in an easy way, whether on a cloud system or an edge solution. Independent of that, the skills we need could be different, but basically developers should be able to program in different kinds of languages and to organize or produce different kinds of data sets.

Something else that is really important in this field is the open source community. The open source community is changing the cadence of AI, because when these models are open, everyone can access them, along with the data sets behind them, and improve those models round by round.

So I think the responsibility we have as developers is huge in this new era of AI. The roles are in different kinds of sectors, manufacturing and retail among them, but more than that it’s about what kind of problem we want to solve today. It could be complex or simple, but the solution should always be as simple as possible, and that is the main challenge developers have right now.

Christina Cardoza: Yeah, I love how you said we need to drive innovations, we need to create intelligent applications, we need to solve problems. Because developers aren’t in it alone; they don’t have to build it from scratch. They can leverage partners like Intel and Voxel and community members to make some of this happen.

For instance, I love that Intel has the Edge Reference Kits, and sometimes you guys are walking them through how to build a solution and giving them the code to do self-checkout or to build something in manufacturing, and they can just customize it after they learn a little bit more about it and how to do that.

So I’m curious, in what other ways can developers partner with companies like Intel, and how that’s going to benefit them to reach out into these different areas and to ask for help or ask questions and be a part of the Intel community or other open source communities?

Paula Ramos: That’s a great question. So, we have multiple channels right now. As you mentioned, we have the Edge Reference Kits that developers can access. There they can easily find a solution (a complex problem with an easy solution), where we show them with tutorials, code, and videos how to navigate a specific vertical: manufacturing, retail, healthcare, and LLMs as well, working with multiple models.

Intel has a variety of solutions. We have hardware accelerators for retraining and fine-tuning models, but we also have solutions that work at the edge, or you can even use your laptop to work with AI. And we have created a specific framework called OpenVINO, which developers can use to optimize and quantize a model. That means they can use the same infrastructure they already have, the same computer, and run LLMs, optimizing and quantizing them to INT4, for example, or they can use the integrated GPU that Intel provides in its processors.
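For readers who want to try the workflow Paula describes, here is a minimal sketch using the OpenVINO and NNCF Python packages. The model path is a placeholder, and the compression settings are illustrative rather than prescriptive:

    # Minimal sketch: compress a model's weights to INT4 with NNCF and run it
    # on whatever Intel hardware is available. "llm.xml" is a placeholder path
    # to an OpenVINO IR model you have already exported.
    import openvino as ov
    import nncf  # Neural Network Compression Framework from the OpenVINO ecosystem

    core = ov.Core()
    model = core.read_model("llm.xml")

    # 4-bit weight compression; this mode is one of several NNCF offers
    compressed = nncf.compress_weights(model, mode=nncf.CompressWeightsMode.INT4_ASYM)

    # "AUTO" lets OpenVINO pick the device: CPU, integrated GPU, or NPU
    compiled = core.compile_model(compressed, "AUTO")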

I think that Intel, with OpenVINO, is enabling developers to easily test and prove out these LLMs. And this is just one step away from the real solution, the solution we want to put into production systems. So developers can create pilots; they can impress their bosses with the tutorials and examples they can run on their own laptops before moving to the final production system. And Intel offers another possibility here too: developers can access Intel Developer Cloud to test multiple kinds of hardware before they buy, which is really cool, including accelerators and the latest platforms such as the AI PC.

So we are providing a lot of tools to developers, and we also have (I almost missed it) an amazing repository where developers can test the latest AI trends: the OpenVINO Notebooks repository. If something happens today, literally within two days you will see a notebook with that specific model. This is for the open source community, so you can test there, for example, Llama 3.1, YOLOv10, and the latest AI trends. It is a great tool.

And the most important thing is that we are not forcing developers to buy specific hardware to run those models; developers can run these models on the hardware they already have. We are also supporting Arm, and we are supporting a variety of Intel hardware, including integrated GPUs, the most widely used kind of GPU in the world.

Christina Cardoza: Yeah, it’s great that you are making it easy for developers to get started with the equipment or hardware that they have. And a lot of the kits and the challenges we were just talking about and repositories—these are ongoing things that are available to developers at any time. But I’m thinking about—I know recently, which probably feels like forever ago, you were at CVPR, and there was a competition and challenge going there. So that’s more of a one-off, timely challenge that is available sometimes to developers going to these events, having these things happen.

So I’m curious, Jason, because I know the company was also at that event, but there’s other events that you guys host or that you’re at that have these developer challenges. I’m curious, what would you say is the importance of developers going to these events, engaging in these communities, and participating in some of those competitions?

Jason Corso: Yeah, it’s a good point, right? I mean, just before CVPR, Voxel had our first in-person hackathon, in New York City, and it’s that type of engagement where we really see excited developers engaging with new technology and trying to work together on new teams to solve a new problem.

That was really fun, but I think one key angle for developer events is obviously education, right? Learning new things. And if you take my earlier answer about how AI has evolved, a key trend we’re seeing for the future is language combined with vision, combined with new compute capabilities, openly available data, and these foundational models, to really tackle new problems in what at Voxel we call visual AI.

I think we’re going to see increasing contributions to that effect, but how do you do it? What do you do? One has to go to developer events or other conferences like CVPR, truly, to stay abreast of what’s happening. For me, in some sense, the educational side is very natural, right? I’m a faculty member; I teach. I’m not teaching this year, but last year I taught intro to computer vision. So for three hours a week I was doing this developer event, in some sense, for 300 students learning about computer vision.

So I think one thing we’ve learned at Voxel is that this AI space is evolving so rapidly that it seems like everyone, even faculty members who’ve been in the field for ages, is in constant information-gathering mode. It’s impossible to stay up to date with everything, from cutting-edge research papers on one hand all the way to the new APIs and libraries you have to learn.

And so to do this, at least at Voxel, what we’ve tried to do is maintain a weekly technical output (at least one per week, if not more) in some form of an event, in different formats, that really allows the community to stay engaged. So we have an events calendar at voxel51.com that we can include in the show notes. I think we have something like two dozen events scheduled between now and the end of the year.

Just personally, for example, every Monday at noon Eastern I maintain open office hours that anyone can sign into; they’re on Zoom. We cover everything from paper reviews (a couple of weeks ago we went through someone’s slides and an actual technical model) all the way to questions like, “This is my first time thinking about getting into computer vision. What should I look at first?” Right? So, pretty broad. But we also have hackathons, virtual meetups, and so on. So I think it’s raw education about foundational capabilities, but these developer events also really help engagement, just from staying up to date with what’s happening.

Christina Cardoza: That’s great, and that’s awesome that you have those open hours that developers can just join and start to learn. I’m curious, because obviously there are virtual conferences, then there can be conferences in different parts of the world, and it can be tough for developers: they can’t go to all of them or there’s just so many out there, it’s hard to choose from. Is there anything coming up that you want to call out that developers should have on their radar? Or is there anything, any other resources available to them online, that you think that they should take advantage of?

Jason Corso: As Paula was saying earlier, being open source is like the gateway to fostering innovation, right? Our software at Voxel51 is called FiftyOne. It’s on GitHub, and we have permissive licensing for the open source component of it, which is basically one user, one machine, local data. You can fork it. You can submit PRs. We make releases on the order of every one to two months, and every release has some content from our community. We’ve learned so much from community needs and community contributions over the four years since we released it.

Most recently we have this new functionality called Panels. FiftyOne is basically a visual component as well as a software SDK for doing the work we’re talking about here, like data and model refinement. But with Panels you can build functionality for the front end without knowing how to write React or JavaScript or anything with UX. You can write it right in Python, and all of a sudden you can still enhance the GUI functionality.
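As a taste of the open source workflow Jason describes, a minimal FiftyOne session might look like the following; the zoo dataset is just a convenient sample:

    # Minimal sketch: load a small sample dataset from the FiftyOne zoo and
    # open the app to browse samples, labels, and predictions interactively.
    import fiftyone as fo
    import fiftyone.zoo as foz

    dataset = foz.load_zoo_dataset("quickstart")  # small demo dataset
    session = fo.launch_app(dataset)              # opens the FiftyOne app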

So I think those are great ways—actual events, but also just becoming a part of open source projects is another way to really to get involved in the developer ecosystem for AI.

Christina Cardoza: Yeah, absolutely. And I think it also helps companies like yours that run these open source projects: somebody in the developer community might pick up on something you haven’t, and they can really be a part of that community, make changes, point things out, and contribute to projects like yours. So it’s always great to be a part of those discussions, to see what’s going on, and to hear what developers are talking about, as well as some of the ongoing challenges they’re facing in these spaces.

Paula, I know OpenVINO—there’s a huge GitHub community around there, and you mentioned a little bit of the kits and some other things that Intel offers, but I’m curious, in what other ways does Intel foster that innovation and that community engagement for developers?

Paula Ramos: That is a great question, because we have been working so hard on that part as well. We are creating multiple ways to drive this innovation with developers. We have one program, the Innovator Program, where developers around the globe can try and test technology. They can make their own applications and share them with us. So stay tuned, for example, on my LinkedIn or in my network: we are highlighting some of these innovators. That is one thing we have. Basically they fork a repository and create new applications, or improve an application with their contributions.

Another thing we have is Google Summer of Code. We run a program with Google every year with multiple proposals, and several developers around the globe work with us for three months with different mentors on the OpenVINO team. And, for example, you mentioned CVPR.

So, we worked with Anomalib, a library that we also have in the OpenVINO ecosystem. We had two proposals about Anomalib last year, and in one of them the Google Summer of Code student, the mentors, and the professor created a paper. The paper was submitted to the visual anomaly inspection workshop at CVPR, the workshop on anomaly detection, and it was accepted. So we are also closing the gap between industry and academia at conferences, participating with students and developers through programs such as Google Summer of Code.

But more than that, we are also moving fast in our relationships with universities: exploring what kinds of things we can work on together and helping them create research proposals that Intel can support.

At CVPR we also sponsored the challenge in this workshop on anomaly detection. We invited developers and created a marketing campaign around the challenge to encourage participation. We received more than 400 participants and more than 100 submissions in about a month and a half, an amazing and remarkable number, and we can see how fast knowledge about anomaly detection is moving.

For sure, talking about OpenVINO, we have multiple things. As I mentioned before, OpenVINO is an open source tool, and we have repositories with different kinds of contributions depending on the product: OpenVINO, OpenVINO Notebooks, and OpenVINO Build and Deploy. In the OpenVINO Build and Deploy repository you can find all the Edge Reference Kits we have been talking about today; in OpenVINO Notebooks you can find the tutorials; and in the OpenVINO repository you can find the API.

So we have a huge ecosystem where we are trying to cover not just the inference part but also the training part, with anomaly detection through Anomalib and with OpenVINO Training Extensions. I really want to invite all the developers, and everyone watching or listening to this podcast, to visit those repositories: visit the “openvinotoolkit” organization on GitHub, and you can find all the repositories I’m talking about.

Christina Cardoza: Absolutely. It’s exciting hearing all of these different resources, all these different ways developers can get started. I’m excited to see, moving forward, what types of solutions and innovations developers continue to build, and I hope they take you guys up on some of these events and meet you—whether that’s in person or virtually. I know sometimes it can be intimidating when you’re getting started in these areas, but having companies like Voxel and Intel support developers, that’s great to see.

And I also saw, Jason, in addition to the virtual office hours, there’s availability to do one-on-one meetings. So if developers feel intimidated somehow or don’t want to ask a question in a group setting, it’s great that you guys are making yourself available to help developers when and where they need it.

So I want to thank you both again for joining us on this podcast. Before we go, are there any final thoughts or key takeaways you want to leave developers with as they go on this journey and engage with each other and with you? Jason, I’ll start with you.

Jason Corso: Great. Yeah, thanks very much. So, I mean, first parting thought would be that I think I just want to express my thanks to the developer community that we’ve built over the last four or five years. We wouldn’t be where we are today without the community. It’s such a vibrant and rich environment.

But the second thing is that we’re hiring; we’re hiring developers. Actually, we’re hiring across the board as we grow, after closing our Series B earlier this year. But relevant to this conversation: machine learning engineer roles, both for core engineering work and for developer relations work. We believe in developers so much that we hire individuals who are fully trained, who can write papers and write code and so on, but whose role is actually building bridges with the community.

And then maybe just the last parting remark: as a company we are open source driven, but we do have dozens of customers that use our commercial enterprise version, which we call FiftyOne Teams. It relaxes that one-user, local-data restriction and allows you to use the same functionality together in teams, in the cloud or on-prem. And we’d love to engage in conversations around FiftyOne Teams with your community as well. We have customers, many of them in the Fortune 500, across manufacturing, security, automotive: a pretty broad customer base. So, thanks.

Christina Cardoza: Yeah, absolutely love to hear about job openings. It shows this space is growing, this space is becoming important, and some of the innovations and transformations that we talk about on “insight.tech” wouldn’t be possible without developers. So, exciting opportunity for anybody listening to go join the Voxel51 team.

Paula, always love having you on the podcast. Thank you, again. I feel like every conversation there’s something new to talk about, something new happening in the AI space. So, curious what our next conversation will be about. But before we go, are there any final thoughts or key takeaways you want to leave with us?

Paula Ramos: Yes, for sure. So, first of all, thank you. Thank you, Christina, for creating this space to talk about what we have. And thank you also to Voxel51. We have been building a great relationship with Voxel51, and at different conferences we try to share some space together.

That also speaks to our real intention to work with the open source community. We are open to working with all of you to find the best path for developers, because the most important thing here is developers. The company matters too, for sure; we have a lot to learn about what kinds of products and tools we can provide to developers. And we are always thinking about how to enable you with the hardware and software we provide so you can accelerate and improve your pipelines and workloads. That is the main intention.

We have a lot of things to share with you right now. We talked about OpenVINO and the Edge Reference Kits, but more things are coming in the future. For example, we have the new AI PC that you can try. It has a new engine in the processor, the NPU (Neural Processing Unit), that can expedite and accelerate parts of both conventional and generative AI on that small device. This is one of the things we can talk about in the future, Christina, for sure. Thank you again, and I’m looking forward to connecting with all of you.

Christina Cardoza: Absolutely. And you talked earlier about how some of these innovations and tools you have available make it easy for developers to start working no matter what hardware they’re using, and the AI PC makes the development, deployment, and performance of AI solutions that much easier. So I know Intel has a lot of resources around AI PCs that we’ll make sure to provide to developers as well.

But thank you both again for joining us today. Thank you to Intel and Voxel51 for these great resources and communities you’ve created for developers and spaces for them to get started and get that support. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Addressing the Design Challenges of 5G OpenRAN

The arrival of 5G has captured the attention of industries worldwide, unlocking new possibilities for high-speed connectivity at a massive scale. In sectors like manufacturing and smart cities, for example, 5G enables far-flung facilities to be networked into a unified whole, enabling unprecedented visibility and responsiveness.

But many applications have needs that public 5G networks cannot meet. This is where private 5G networks step into the spotlight. “There is a pressing need for customized infrastructure to fully leverage the capabilities of 5G,” explains Zeljko Loncaric, Market Segment Manager of Infrastructure at congatec, an embedded computer boards and modules provider, pointing out security, real-time reliability, and network flexibility as some of the key requirements.

This growing demand for tailored solutions and the adoption of private 5G networks come at a perfect time, coinciding with the emergence of open standards like OpenRAN. This shift presents a unique opportunity for telecommunications equipment manufacturers (TEMs), who are no longer constrained by markets dominated by a few major players. Instead, OpenRAN’s open interfaces and standards promote vendor diversity—an important strategic focus for TEMs, Loncaric notes.

Opening Up New Possibilities for OpenRAN

To build 5G solutions that leverage OpenRAN capabilities, TEMs historically have had to overcome several hurdles. Specifically:

  • Integrating components from various sources while keeping performance high and costs low.
  • Ensuring robust security. This is a particularly pressing concern for TEMs targeting private 5G networks, which often host high-value data.
  • Designing equipment for harsh environments. (The limited range of 5G radios means that equipment is often deployed deep into the field.)
  • Ensuring solutions can scale effectively to meet the demands of diverse deployments.


That’s why congatec developed a solution to provide TEMs with a faster path to market. The conga-HPC/sILH platform is designed to pre-integrate the most complex system elements. The solution includes a backhaul connection to the core network, two RF antenna modules, an Intel® Xeon® D processor, a secure Forward Error Correction (FEC) accelerator, and the full FlexRAN software stack.

According to Loncaric, the technology package is suitable for all types of 5G radio access network configurations. With conga-HPC/sILH, TEMs can focus on their core competencies and keep their specific IP in-house, delivering 5G OpenRAN servers with high levels of trust and design security.

The Role of COM-HPC in Building Robust 5G Infrastructure

The heart of the platform is the COM-HPC Server Size D module, which features an Intel Xeon D processor. This combination offers the performance, efficiency, and security features needed for 5G applications. Notably, selected modules support extreme temperature ranges from -40°C to 85°C, enabling OpenRAN servers to be deployed beyond the confines of air-conditioned server rooms.

The modules plug into Intel’s platform carrier board, which provides a robust and flexible foundation for developing 5G infrastructure. For instance, it supports a wide range of interfaces and acceleration technologies, helping TEMs to streamline the design process.

“The carrier board is a highly flexible reference platform that demonstrates the effectiveness of our offering and provides significant support for TEMs. Combined with our COM-HPC Server module, it enables rapid custom builds that require connections and interfaces not typically found in a RAN server,” says Loncaric.

Enabling Security and Flexibility in Private 5G Networks

To overcome the security concerns of 5G, the platform includes Intel® Software Guard Extensions, which enable secure channel setup and communication between 5G control functions. Built-in crypto acceleration reduces the performance impact of full data encryption and enhances the performance of encryption-intensive workloads.

For precise timing, the platform incorporates Synchronous Ethernet (SyncE) and a Digital Phase-Locked Loop (DPLL) oscillator. These technologies are crucial for synchronizing nodes with the 5G infrastructure.

Together, these technologies allow TEMs to significantly reduce their design effort and accelerate time-to-market. The modular nature of the solution also optimizes ROI and sustainability, as systems can be easily scaled and upgraded with a simple module swap. According to Loncaric, this approach can reduce upgrade costs by up to 50% compared to a full system replacement.

Looking Ahead: The Future of Private 5G Networks and OpenRAN

congatec attributes the success of its platform to its partnership with Intel.

“Telecommunications is a really hard market to access—up until around ten years ago, it was more or less impossible,” Loncaric explains. “By partnering with Intel and through initiatives like the O-RAN Alliance, we were able to enter it step by step. Since then, we’ve released several new standards—the latest, based on Intel Xeon D, is a good fit for several niche applications such as campus networks and industrial environments.”

Looking to the future, congatec plans to develop more solutions that will provide TEMs with even higher performance. Beyond that, the company intends to continue its focus on open standards and edge computing expertise.

“We believe our commitment to open standards and our extensive experience in edge computing and industrial applications position us as a key player in 5G technology across multiple market segments. Through continuous innovation and collaboration with industry-leading partners like Intel, we aim to drive the development of next-generation communication networks, ensuring they continue to meet the evolving needs of modern applications,” says Loncaric.

As the 5G market continues to evolve, solutions like the conga-HPC/sILH COM-HPC platform will play a crucial role in enabling TEMs to meet the diverse and rapidly changing demands of 5G OpenRAN deployments. By providing a flexible, integrated, and powerful foundation, this platform empowers TEMs to innovate faster and deliver the next generation of 5G infrastructure.

 

This article was edited by Christina Cardoza, Editorial Director for insight.tech.

Modernizing the Factory with the Industrial Edge

Often when you talk about digital transformation and Industry 4.0, the focus is technology. But people are the key to change.

As manufacturers adopt modern technologies, challenges they face usually stem more from the mindset and collaboration of those implementing them rather than from the tools themselves, according to Kelly Switt, Senior Director and Global Head of Intelligent Edge Business Development at Red Hat, provider of enterprise open source software solutions.

Manufacturing Operations Rely on Team Relationships

Manufacturing operations rely so heavily on collaborative and adaptable teams and individuals because they involve complex processes that require domain expertise, coordination, troubleshooting, and optimization. Shifting from legacy systems to modern, interconnected platforms, for example, requires a corresponding change in mindset.

The technologies and tools implemented within the factory should empower collaboration and productivity by breaking down silos and removing friction between teams.

“Businesses are a formation of people, and how those people operate the business often emulates system design,” explains Switt. “If you have poor collaboration with your IT counterparts or still experience siloed friction in the relationship, it will manifest in your systems—whether it’s a lack of resiliency or the inability to stay on schedule.”

That’s why Red Hat and Intel collaborated on a modern approach to advancing manufacturing operations and teams. The industrial edge platform is a portfolio of enablement technologies, including Red Hat Device Edge, Ansible Automation Platform, and OpenShift. It also features Intel’s cutting-edge hardware and software stack, including Intel® Edge Controls for Industrial, allowing users to create a holistic solution that meets their specific needs.


Bridging the Gap with Industrial Automation

A key component of the Red Hat industrial edge platform enables automation of previously manual tasks, one of the first steps toward overcoming cultural challenges. Software automation strategies that enable provisioning, configuring, and updating can also provide a common ground for IT and OT teams to collaborate, and free them up for more critical tasks.
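As a hedged illustration of what that automation can look like in practice (this sketch is not drawn from Red Hat’s documentation), a provisioning playbook might be triggered programmatically with the ansible-runner Python package; the directory layout and playbook name below are hypothetical:

    # Hypothetical sketch: kick off an Ansible playbook run from Python with
    # ansible-runner. The data directory and playbook name are placeholders.
    import ansible_runner

    result = ansible_runner.run(
        private_data_dir="/opt/edge-automation",  # holds inventory/ and project/
        playbook="provision_line_controllers.yml",
    )
    print(result.status)  # e.g. "successful" or "failed"
    print(result.rc)      # process return code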

“By automating routine tasks, you can free up the capacity of your staff to focus on more critical aspects of modernization,” Switt explains.

The industrial edge platform helps automate tasks, including system development, deployment, management, and maintenance, not only at the server compute level but also at the device and networking level, allowing for more autonomous management of infrastructure.

“You can really create a platform-based strategy around how you think about having more autonomous management of the infrastructure that best supports the productivity of your facility,” says Switt.

Once automation is in place, the next step is modernizing the data centers within the factory. These centers tend to house larger, more critical applications that run the manufacturing processes. Modernizing these systems allows for greater agility and faster changes, which are crucial in today’s fast-paced manufacturing environment.

“Modern technology allows you to have applications with more agility, enabling more frequent updates and faster adaptation to changing needs,” Switt explains. “This not only improves productivity but also enhances the collaboration between IT and OT teams.”

The pharmaceutical industry, for example, needs a level of supply chain traceability. Modern technology enables organizations to reduce the time needed to implement changes from six months to a year to just 90 days. This acceleration brings significant value and benefits to management of both the plant or factory and the overall productivity and output of the facility.

In addition, the industrial edge platform delivers a real-time kernel that lowers latency and reduces jitter so applications can run repeatedly with greater reliability.

“Red Hat’s solutions allow you to not only have an autonomous platform but one that is stable, secure, and based on open source so manufacturers can get to an open, interoperable platform with less proprietary hardware,” says Switt.

Future of Manufacturing Enabled by the Industrial Edge

As manufacturers continue to navigate the complexities of Industry 4.0, collaborations like the one between Red Hat and Intel, focused on culture, people, and mindset, are crucial to the success of their efforts.

“Intel is a core collaborator of ours because not only is Intel ubiquitous with running both the public cloud as well as the IT data centers but is, and should continue to be, ubiquitous with running the factory data center or data room facilities,” Switt says.

By breaking down silos, embracing automation, and modernizing infrastructure, manufacturers can unlock the full potential of their operations and pave the way for a more agile, efficient, and innovative future.

“With Red Hat and Intel, we have the technology that enables you to run a better, faster, and more efficient factory. It’s up to manufacturers to decide what their future looks like, how they want to operate, and the level of collaboration and culture change they bring in to do so,” says Switt.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

The Journey to the Network Edge

The advantages of moving to the network edge are clear: greater speed, enhanced security, and improved user experience. But how does a business actually make that move? What capabilities will best fit the bill and how much should it cost? Is there some kind of Platonic ideal solution out there that a company should search for?

We explore the network edge with CK Chou, Product Manager for IT/OT hardware-solution provider CASwell. He talks about difficulties in transitioning to the network edge, the role of AI there, and how old-school technology can point the way to a valuable solution with just a little creative thinking (Video 1).

Video 1. CASwell’s CK Chou talks about the challenges of moving to the edge and the role of network edge devices on the “insight.tech Talk.” (Source: insight.tech) 

Why are businesses moving to the network edge these days?

If we are talking about edge computing, we all know that it is all about handling data right where it is created instead of sending everything to the central server. This means faster response and less internal traffic, so it is perfect for things that need instant reactions, like manufacturing, retail, transportation, financial services, et cetera.

Let me say it in this way: Imagine you are in a self-driving car and something unexpected happens on the road. You need your car to react instantly, because every millisecond counts; you cannot afford a delay waiting for data to travel to a distant server and back. It’s not like waiting for a loading screen when you’re using your computer, right? In self-driving scenarios, any delay could mean life or death. This is one example where edge computing comes in to handle data right at the source to make those split-second decisions.

And of course it’s not just about the speed; it’s also about keeping your information safe. If sensitive data like your financial information can be processed locally instead of being sent over the internet to the central server, there’s a lower chance of it being intercepted or hacked. The less your data travels around, the safer it stays.

By processing data on the spot, edge computing helps keep everything running smoothly, even in places where internet connections might be unreliable. In short, edge computing is all about speed, security, and reliability. It brings the power of data processing closer to where it’s needed most—whether it’s in your car or your doctor’s office or on the factory floor.

But moving to the network edge is not always easy. It’s a big step and comes with its own set of challenges. Companies face things like increased complexity in managing systems, higher infrastructure costs, limited processing power, data-management issues, and more. Despite these challenges, the benefits of edge computing are too significant to ignore. It can really boost the infrastructure performance, improve security, and save the overall cost, eventually making it worth the effort to overcome all those hurdles.

What capabilities of network-edge devices will help with business success?

It is a tricky question. If I’m talking about my dream edge device, it needs to be small and compact, and packed with multiple connection options, like SNA, Wi-Fi, and 5G, for different applications. It would be nice to have a rugged design that could operate in a harsh environment and handle a wide range of temperatures, in case users want to install the equipment in freezing cold mountains or hot deserts. It should also offer powerful processing while consuming little power. And, of course, the most important thing is that the cost of this all-in-one box needs to be extremely low.

Getting all that in one device sounds perfect, right? But do you really think that would even be possible? The truth is, companies at the edge don’t really need an all-in-one box. What they really need is a device with the right features for their specific environment and application. And that’s what CASwell is all about.

We have a product line that can provide a variety of choices—from basic models to high-end solutions and from IT to OT applications. Whether it’s for a small office, a factory, or a remote location, we have got options designed for different conditions and requirements so companies can easily find the right edge device without paying for features they don’t really need.

What is the role of AI at the network edge?

Nowadays, AI-model training is done in the cloud, due to its need for massive amounts of data and high computational power. But think about how big an AI data center needs to be. Imagine something the size of a football field filled with dozens of big blocks, and each block is packed with hundreds of servers, all linked together and working nonstop on model training.

An AI server like that sounds amazing, but it is too far from our general use cases and not affordable by our customers. Remember: The concept of edge computing is all about handling data right where it is created instead of sending everything to a central server. So if we want to use AI to enhance our edge solutions, we cannot just move the entire AI factory to our server room—unless you are super rich and your server room is the size of a football field.

Instead, we keep the heavy-duty deep learning tasks in a centralized AI center and shift the inference part to the edge. This approach requires much less power and data, making it perfect for edge equipment. We’re already seeing this trend with AI integrated into our everyday devices like mobile phones and AI-enabled PCs. These devices use cloud-trained models to make smart decisions, provide personalized experiences, and enhance user interaction.
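A rough sketch of that split, assuming an OpenVINO-style runtime on the edge box: the model is trained and exported elsewhere, and the edge device only loads and runs it. The file name and input shape are placeholders:

    # Illustrative edge-inference flow: the heavy training happened in the
    # cloud; the edge box only loads the exported model and runs it locally.
    import numpy as np
    import openvino as ov

    core = ov.Core()
    compiled = core.compile_model("detector.xml", "CPU")  # placeholder model file

    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in camera frame
    results = compiled(frame)  # inference on-device; no round trip to the cloud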

CASwell is right now building a new product line for edge-AI servers. It is designed to bring AI capabilities right from the data center to the edge, giving us the power of AI instantly. It puts AI directly in the hands of those who need it, right when they need it.

How does CASwell help businesses address their network edge challenges?

We saw a trend where edge environments were becoming more challenging than we initially expected. More end users were looking for solutions that could work in both IT and light OT environments. They wanted to install edge equipment not just in the office—with air conditioning and on clean, organized racks—but also in environments like warehouses, factory floors, or even just in cabinets without proper airflow. 

CASwell decided to develop an entry-level desktop product, the CAF-0121, built around the Intel Atom® processor, which offers a great balance of performance and power efficiency. The CAF-0121 can handle a wider temperature range, something like -20°C to 60°C, up from the typical 0°C to 40°C. This small box also provides 2.5-gig Ethernet support to fulfill basic infrastructure connectivity. Plus, it is compact and fanless, with a passive-cooling design, which is suitable for edge computing applications.

Our goal with this new model was to provide OT-grade specs at an IT-friendly price. This means users can cut down on the resources needed to manage their infrastructure and make deployment much simpler. They can use the same equipment across both IT and OT applications, making it easier to standardize and maintain their technology setup. This approach allows businesses to adapt the CAF-0121 to different environments without needing separate solutions for each scenario, so it is really an exciting product.

What were some of the challenges with creating CAF-0121?

The technology around the thermoelectric module—we call it TEM—is what we rely on for CAF-0121. TEM is already a proven solution for cooling overheating components; it is common in things like medical devices, car systems, refrigerators, water coolers, and other equipment that needs quick and accurate temperature control.

These devices work on creating a temperature difference when electric current passes through them, causing one side to heat up and the other side to cool down. The more current we send through, the bigger the temperature difference we get between the two sides.

People normally use the cooling capability of the TEM, but we had a different idea: Why not leverage both the cooling and heating capabilities to help our edge devices operate in a wider temperature range? The overall concept is that by leveraging the heating capability of the TEM, we can extend the system’s operating temperature range downward. Conversely, the cooling capability can cool the system down when the internal ambient temperature rises to a certain level. When the room is getting cold, the TEM operates as a heater; when the room is getting hot, the TEM operates as a cooler.
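In control terms, that dual use amounts to a simple hysteresis scheme. The sketch below is a hypothetical illustration of the idea, not CASwell’s firmware; the thresholds and drive modes are invented for the example:

    # Hypothetical hysteresis controller for a dual-use TEM: heat below a low
    # threshold, cool above a high one, stay off in the band in between.
    HEAT_BELOW_C = 0.0   # start heating under this internal temperature
    COOL_ABOVE_C = 50.0  # start cooling over this internal temperature

    def tem_mode(internal_temp_c: float) -> str:
        """Return the TEM drive mode for the current internal temperature."""
        if internal_temp_c < HEAT_BELOW_C:
            return "heat"  # forward current: warm side faces the components
        if internal_temp_c > COOL_ABOVE_C:
            return "cool"  # reversed current: cold side faces the components
        return "off"       # inside the safe band, save power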

With a TEM, we are no longer limited to the operation temperature range of our individual components, allowing us to expand the temperature range of our equipment beyond what the components could typically allow. With the TEM we can push the temperature boundaries and the device can still maintain reliability.

And with this project we have gained some really valuable know-how, using an old-school technology as an innovative solution to bring added value to our products in this highly competitive market. We also want this small success to inspire our R&D team to stay creative and think outside the box, not just stick to the traditional way of doing things.

How does CASwell work with technology partners to make its product line possible?

A solid edge computing device should have just the right processing power, be energy efficient and packed in a compact size, with a variety of connection options, and of course have a competitive price. These are really the basic must-haves for any edge computing device.

That’s why we chose the Intel Atom processor for the CAF-0121 project. With the Atom we can provide the right level of performance and still keep power consumption low. And the Intel LAN controller helps us easily add support for 2.5-gig Ethernet to this box, ensuring compatibility with most infrastructure requirements.

The Atom also has built-in instructions that can accelerate IPsec traffic, making it an excellent choice for security-focused applications. Whether you are dealing with data encryption, secure communications, or other security jobs, this processor is up to the challenge.

If we wanted to further enhance the security, Atom is also integrated with BIOS Guard and Boot Guard to provide a hardware root of trust. So we are not just talking about great performance and efficiency, we are delivering a high level of protection for the BIOS and the boot-up process. This level of security is crucial, especially for edge devices that need to handle sensitive information and critical tasks without compromising protection.

Among the various players in this market, only Intel offers a one-stop shop for all these features. Intel doesn’t just provide the hardware but also the driver and firmware support. This level of integration has made the development of the CAF-0121 project so much easier, and it has really shortened our time to market. When you have got the processing power, security features, and even software support all coming from one reliable partner, it certainly streamlines the whole process. It doesn’t just simplify the engineering and development work but also ensures that everything works seamlessly together.

Then the hardware designer, like CASwell, can focus more on optimizing performance and less on troubleshooting compatibility issues. This is a big win both for us and for our customers, allowing us to deliver high-quality, reliable edge computing solutions faster and more efficiently.

In the end, our goal is very simple: We aim to set a new standard of edge computing equipment and provide flexible edge solutions to help customers tackle challenges from the cloud and through the network and all the way to the intelligent edge.

Related Content

To learn more about the network edge, listen to The Network Edge Advantage: Achieving Business Success and read AI Everywhere—From the Network Edge to the Cloud. For the latest innovations from CASwell, follow them on LinkedIn.

 

This article was edited by Erin Noble, copy editor.

Reverse Proxy Server Advances AI Cybersecurity

AI models rely on constant streams of data to learn and make inferences. That’s what makes them valuable. It’s also what makes them vulnerable. Because AI models are built on data they are exposed to, they are also susceptible to data that has been corrupted, manipulated, or compromised.

Cyberthreats can come from bad actors that fabricate inferences and inject bias into models to disrupt their performance or operation. The same outcome can be produced by Distributed Denial of Service (DDoS) attacks that overwhelm the platforms that models run on (as well as the model itself). These and other threats can subject models and their sensitive data to IP theft, especially if the surrounding infrastructure is not properly secured.

Unfortunately, the rush to implement AI models has resulted in significant security gaps in AI deployment architectures. As companies integrate AI with more business systems and processes, chief information security officers (CISOs) must work to close these gaps and prevent valuable data and IP from being extracted with every inference.

AI Cybersecurity Dilemma for Performance-Seeking CISOs

On a technical level, there is a simple explanation for the lack of security in current-generation AI deployments: performance.

AI model computation is a resource-intensive task and, until very recently, was almost exclusively the domain of compute clusters and super computers. That’s no longer the case, with platforms like the octal-core 4th Gen Intel® Xeon® Scalable Processors that power rack servers like the Dell Technologies PowerEdge R760, which is more than capable of efficiently hosting multiple AI model servers simultaneously (Figure 1).

Figure 1. Rack servers like the Dell PowerEdge R760 can host multiple high-performance Intel® OpenVINO toolkit model servers simultaneously. (Source: Dell Technologies)

But whether hosted at the edge or in a data center, AI model servers require most if not all of a platform’s resources. This comes at the expense of functions like security, which is also computationally demanding, almost regardless of the deployment paradigm:

  • Deployment Model 1—Host Processor: Deploying both AI model servers and security like firewalls or encryption/decryption on the same processor pits the workloads in a competition for CPU resources, network bandwidth, and memory. This slows response times, increases latency, and degrades performance.
  • Deployment Model 2—Separate Virtual Machines (VMs): Hosting AI models and security in different VMs on the same host processor can introduce unnecessary overhead, architectural complexity, and ultimately impact system scalability and agility.
  • Deployment Model 3—Same VM: With both workload types hosted in the same VM, model servers and security functions can be exposed to the same vulnerabilities. This can exacerbate data breaches, unauthorized access, and service disruptions.

CISOs need new deployment architectures that provide both the performance scalability that AI models need and the ability to protect the sensitive data and IP residing within them.

Proxy for AI Model Security on COTS Hardware

An alternative would be to host AI model servers and security workloads on different systems altogether. This provides sufficient resources to avoid unwanted latency or performance degradation in AI tasks while also offering physical separation between inferences, security operations, and the AI models themselves.

The challenge then becomes physical footprint and cost.

Building on a Dell PowerEdge R760 Rack Server featuring a 4th Gen Intel Xeon Scalable Processor, F5 integrated an Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100. @F5 via @insightdottech

Recognizing the opportunity, F5 Networks, Inc., a global leader in application delivery infrastructure, partnered with Intel and Dell, a leading global OEM with an extensive product portfolio, to develop a solution that addresses these requirements in a single commercial-off-the-shelf (COTS) system. Building on a Dell PowerEdge R760 Rack Server featuring a 4th Gen Intel Xeon Scalable Processor, F5 integrated an Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100 (Figure 2).

Figure 2. The Intel® Infrastructure Processing Unit (Intel® IPU) Adapter E2100 offloads security operations from a host processor, freeing resources for other workloads like AI training and inferencing. (Source: Intel)

The Intel IPU Adapter E2100 is an infrastructure acceleration card that delivers 200 GbE bandwidth, x16 PCIe 4.0 lanes, and built-in cryptographic accelerators that combine with an advanced packet-processing pipeline to deliver line-rate security. The card’s standard interfaces allow native integration with servers like the PowerEdge R760, and the IPU provides ample compute and memory to host a reverse proxy server like F5’s NGINX Plus.

NGINX Plus, built on the open-source NGINX web server, can be deployed as a reverse proxy that intercepts and decrypts/encrypts traffic going to and from a destination server. This separation helps mitigate DDoS attacks, and it means cryptographic operations can take place somewhere other than the AI model server host.

The F5 Networks NGINX Plus reverse proxy server provides SSL/TLS encryption as well as a security air gap between unauthenticated inferences and Intel® OpenVINO toolkit model servers running on the R760. In addition to operating as a reverse proxy server, NGINX Plus provides enterprise-grade features such as security controls, load balancing, content caching, application monitoring and management, and more.
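
To make the traffic flow concrete, here is a minimal sketch of what an inference client looks like in this kind of architecture. The gateway hostname, model name, and input are hypothetical placeholders; OpenVINO model servers typically expose a TensorFlow Serving-compatible REST endpoint, and the IPU-hosted proxy simply terminates TLS in front of it.

```python
# Minimal sketch (not F5's implementation): the client talks TLS to the
# reverse proxy on the IPU; the proxy decrypts, applies security policy,
# and forwards the request over the internal network to the model server.
import json
import urllib.request

# Hypothetical gateway and model name; the proxy terminates TLS here.
PROXY_ENDPOINT = "https://ai-gateway.example.com/v1/models/resnet:predict"

payload = {"instances": [[0.1, 0.2, 0.3]]}  # dummy input; real shape depends on the model
request = urllib.request.Request(
    PROXY_ENDPOINT,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# The host CPU never spends cycles on the TLS handshake or decryption;
# those happen on the IPU, leaving the Xeon processor free for inference.
with urllib.request.urlopen(request) as response:
    print(json.load(response))
```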

Streamline AI Model Security. Focus on AI Value.

For all the enthusiasm around AI, there hasn’t been much thought given to potential deployment drawbacks. Any company looking to gain a competitive edge must rapidly integrate and deploy AI solutions in its tech stack. But to avoid buyer’s remorse, it must also be aware of security risks that come with AI adoption.

Running security services on a dedicated IPU not only streamlines deployment of secure AI but also enhances DevSecOps pipelines by creating a distinct separation between AI and security development teams.

With an architecture like this in place, maybe we won’t have to spend so much time worrying about AI security after all.

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

The Network Edge Advantage: Achieving Business Success

In today’s rapidly evolving technology landscape, businesses increasingly turn to network edge solutions to meet the demands of real-time data processing, enhanced security, and improved user experiences. But deploying these solutions comes with its own set of challenges, including latency issues, bandwidth constraints, and the need for robust infrastructure.

This podcast episode explores the world of network edge computing, and the unique challenges businesses face when deploying these advanced solutions. We discuss the critical features of network edge devices and how AI can help drive efficiency. Additionally, we examine the specific challenges and demands industries encounter and how they can overcome them.

Listen Here

[Podcast Player]

Apple Podcasts      Spotify      Amazon Music

Our Guest: CASwell

Our guest this episode is CK Chou, Product Manager at CASwell, a leading hardware manufacturer for IoT, network, and security apps. CK joined CASwell in 2014 and has since worked to build strong customer relationships by ensuring that CASwell’s solutions meet specific needs and standards.

Podcast Topics

CK answers our questions about:

  • 2:42 – The move to the network edge
  • 6:17 – Network edge devices built for success
  • 11:15 – Moving to AI at the network edge
  • 14:37 – Addressing network edge challenges
  • 17:30 – Overcoming the increased demand
  • 22:37 – Implementing network edge devices
  • 25:32 – Partnering on performance and power

Related Content

To learn more about the network edge, read AI Everywhere—From the Network Edge to the Cloud. For the latest innovations from CASwell, follow them on LinkedIn.

Transcript

Christina Cardoza: Hello and welcome to “insight.tech Talk,” where we explore the latest IoT, AI, edge, and network technology trends and innovations. I’m your host, Christina Cardoza, Editorial Director of insight.tech, and today we’re taking on the conversation of the network edge with CK from CASwell. But before we get started, let’s get to know our guest. CK, what can you tell us about yourself and what you do at CASwell?

CK Chou: Hi, Christina; hi, everyone. My name is CK, and I have over 10 years of experience in product management at CASwell. My main focus has been on serving customers in Europe and the Middle East. Over the years my mission has been to build strong relationships with clients across these regions, ensuring that the solutions from CASwell meet their specific needs and standards.

And about CASwell—we originally began as a division dedicated to network-security applications. Over time our expertise and focus grew, leading us to branch out and establish ourselves as a standalone company in 2007. Over the years CASwell has placed a strong emphasis on R&D to stay at the forefront of technology and innovation. However, we were not satisfied with being only a networking player, so we expanded our business into information and operation applications. I should say that our journey from a small division to an independent company wasn’t just about getting bigger; it was about getting better at what we do.

Nowadays, CASwell is a leading hardware solution provider for the IT and OT industries in Taiwan, specializing in the design, engineering, and manufacturing of not only networking appliances but also industrial equipment, edge computing devices, and advanced edge-AI solutions that meet the demands of modern applications.

Christina Cardoza: Great, and I’m looking forward to digging into some of that hardware. But before we jump into that, I want to start the conversation trying to understand a little bit more of why companies are moving to the network edge. I like how you said in your introduction: you’re trying to stay at the forefront of technology and innovation and get better at what you do. And I think a lot of businesses are trying to do the same, and they look to CASwell to help them along that journey. But why are they moving to the network edge today, and what challenges are they facing on their journey?

CK Chou: If we are talking about edge computing, we all know that it is all about handling data right where it is created instead of sending everything to a central server. This means faster responses and less network traffic, which makes it perfect for things that need instant reactions, like manufacturing, retail, transportation, and financial services.

Let me put it this way. Imagine you are in a self-driving car and something unexpected happens on the road. You need your car to react instantly because every millisecond counts, okay? You cannot afford a delay waiting for data to travel to a distant server and back. It’s not like waiting for a loading icon on your computer, right? In self-driving scenarios any delay could mean life or death. This is just one example of where edge computing comes in, handling data right at the source to make those split-second decisions.

And of course it’s not just about the speed; it’s also about keeping your information safe. If sensitive data like your financial information can be processed locally instead of being sent over the internet to the central server, there’s a lower chance of it being intercepted or hacked. The less your data travels around, the safer it stays.

This kind of localized processing is also super important in other areas, like health care—which needs instant diagnostic results—or machines in a factory detecting problems. By processing data on the spot, edge computing helps keep everything running smoothly, even in places where internet connections might be unreliable. So, in short, edge computing is all about speed, security, and reliability. It brings the power of data processing closer to where it’s needed most—whether that’s in your car, your doctor’s office, or on the factory floor.

But from what I hear from some of our customers, moving to the network edge is not always easy. It’s a big step and comes with its own set of challenges. Companies face things like increased complexity in managing systems, higher infrastructure costs, limited processing power, data-management issues, and more. Despite these challenges, the benefits of edge computing are too significant to ignore. It can really boost infrastructure performance, improve security, and reduce overall costs, eventually making it worth the effort to overcome all those hurdles.

Christina Cardoza: Yeah, absolutely. I can definitely see the need for network edge and edge computing with all the demands of the real-time data processing, like you mentioned—the enhanced security, improving user experiences.

But I feel like a lot of times when we discuss the edge it feels very abstract. We know all of the benefits and why we should be moving there, but how do we move there? Is there a network-edge device, for instance, that is able to help us move to the edge and get all of these benefits? What does that look like?

CK Chou: The challenges that I mentioned earlier make moving to the edge seem expensive and complicated. But if companies can integrate reliable edge devices with innovative, dependable, and affordable hardware features, they can overcome these challenges and put their limited resources into building and managing their infrastructure, maintaining their data, improving security, and training their staff.

That’s why companies need to work closely with an edge-device provider like CASwell. Our customers can always count on us because we design the right equipment for the right use case; the right edge devices are the key to their edge journey and make the transition smoother and easier. So, at the end of the day, having the right device with the right features is essential, but only together with the right partner—like CASwell. We support them from the hardware perspective, allowing companies to focus more on their own specialization. Each party plays its own role, enabling companies to truly do more on their edge journey.

Christina Cardoza: I know you mentioned obviously it’s important to have the right features and reliable, affordable hardware, and that helps you build and manage infrastructure and maintain that data that’s really important. But can you talk a little bit more about what those features and hardware capabilities look like? When companies are looking for a network-edge device, what type of capabilities are really going to bring them success?

CK Chou: Okay, it is a tricky question for me. If I’m talking about my dream edge device, it needs to be small and compact, and also packed with multiple connection options, like LAN, Wi-Fi, and 5G, for different applications. It would also be nice to have a rugged design that can operate in harsh environments and handle a wide temperature range, in case users want to install the equipment in freezing mountains or hot deserts. It should also offer powerful processing while consuming little power. And, of course, the most important thing: the cost of this all-in-one box needs to be extremely low.

Getting all that in one device sounds perfect, right? But do you really think that would even be possible? Okay, I can tell you the truth is, companies at the edge don’t really need an all-in-one box. What they really need is a device with the right features for their specific environment and application, and that’s what CASwell is all about.

We have a product line that provides a variety of choices, from basic models to high-end solutions and from IT to OT applications. Whether it’s for a small office, a factory, or a remote location, we have options designed for different conditions and requirements. So, with the right partner, companies can easily find the right edge device without paying for features they don’t really need.

Moving to edge computing certainly costs a lot, so we need to do it smartly and efficiently. The idea is to ensure that every edge player can get exactly what they need to optimize their operations and stay ahead of the game. So, sorry, there is no single answer to your question here. In my opinion, if an edge device offers the right features and the right capabilities at an affordable cost for the specific use case, then it’s the good edge device we are looking for.

Christina Cardoza: Yeah, absolutely. I love that businesses don’t necessarily need an all-in-one box. So many times businesses are focused on finding something cost effective that tries to meet all their needs, and they sort of lose sight of what their needs actually are, how a device can help them, and what the benefits are in the long run. So that’s definitely great, and I want to dig a little deeper into how partnerships work with CASwell, as well as the different product lines that you have.

But before we get there I’m a little curious, because obviously when we talk about the edge today, AI is so closely related to it. AI at the edge is a term that’s going around these days, so I’m curious what role AI plays at the network edge, especially when we’re talking about network-edge devices.

CK Chou: We know that nowadays AI-model training is done in the cloud due to its need for massive amounts of data and high computational power. If you do a quick search online, you’ll find lots of pictures showing what an AI factory or AI data center needs to look like. Imagine something the size of a football field, filled with dozens of big blocks, each packed with hundreds of servers, all linked together and working nonstop on model training.

I agree that such an AI server farm sounds amazing, but it is too far from our general use case and not affordable for our customers. As we talked about earlier, the concept of edge computing is all about handling data right where it is created instead of sending everything to a central server. So, if we want to use AI to enhance our edge solutions, we cannot just move the entire AI factory into our server room—unless you are super rich and your server room is the size of a football field.

Instead, we keep the heavy-duty deep learning tasks in a centralized AI center and shift the inference part to the edge. This approach requires much less power and data, making it perfect for edge equipment. We’re already seeing this trend with AI integrated into our everyday devices, like mobile phones and AI-enabled PCs. These devices use cloud-trained models to make smart decisions, provide personalized experiences, and enhance user interaction.

Building on this trend, edge-AI servers are coming into the picture at CASwell by integrating general AI compute capability—we often use a GPU engine here. These edge servers can handle basic AI calculations on top of our existing hardware. This means faster decision-making and the ability to use AI-driven insights in real time, whether it’s for cybersecurity, small factories, or other edge applications.

CASwell is now building a new product line of edge-AI servers designed to bring AI capabilities from the data center right to the edge, putting the power of AI directly in the hands of those who need it, right when they need it.
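
As an editorial aside, the cloud-train, edge-infer split CK describes can be sketched with Intel’s OpenVINO toolkit. This is a minimal illustration, assuming a model already trained centrally and exported to OpenVINO’s IR format; the file name and input shape are placeholders.

```python
# Minimal edge-inference sketch: the heavy training happened in the data
# center; the edge box only loads the exported model and runs inference.
import numpy as np
import openvino as ov  # pip install openvino

core = ov.Core()
model = core.read_model("model.xml")         # placeholder IR file from central training
compiled = core.compile_model(model, "CPU")  # compile for the edge device's CPU

# Placeholder input; a real deployment would feed camera frames or sensor
# readings shaped to match the model's input.
frame = np.zeros((1, 3, 224, 224), dtype=np.float32)

result = compiled(frame)                     # local, low-latency inference
print(result[compiled.output(0)])
```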

Christina Cardoza: So, tell me a little bit more about that product line or the other products that CASwell offers. You mentioned that you have a whole suite of tools to help businesses depending on what their needs are, their demands, and what they’re trying to get. So, how is CASwell helping these businesses address their network-edge challenges and demands?

CK Chou: I can introduce a model, the CAF-0121. The CAF-0121 is an interesting entry-level desktop product from CASwell, built around Intel’s new-generation Intel Atom® processor, which offers a great balance of performance and power efficiency. This small box also provides 2.5GbE support to cover basic infrastructure connectivity, plus a compact, fanless, passively cooled design that suits edge computing applications.

But we can see a trend where edge environments are becoming more challenging than we initially expected. End users want to install edge equipment not just in office spaces with air conditioning or on clean, organized racks, but also in OT environments like warehouses, factory floors, and even cabinets without proper airflow. The line between IT and OT is blurring, and more users are looking for solutions that can work in both IT and light OT environments.

As a compromise, CASwell decided to develop the CAF-0121 to handle a wider temperature range—from the typical 0°C to 40°C up to something like -20°C to 60°C. Our goal with this new model is to provide OT-grade specs at an IT-friendly price. This means users can cut down on the resources needed to manage their infrastructure and make deployment much simpler. They can use the same equipment across both IT and OT applications, making it easier to standardize and maintain their technology setup. This approach allows businesses to adapt to different environments without needing separate solutions for each scenario, which makes the CAF-0121 a really exciting product.

Christina Cardoza: Yeah, that’s great that you developed the CAF-0121 to help businesses in all of their needs. It occurs to me as we’re talking about this, the different temperature ranges that they need to meet, the cost ranges, that not only are businesses having challenges, but sometimes it can be challenging for partners like CASwell to create these solutions that meet their demand.

So, I’m just curious if there’s any insight you can provide about developing this product—whether you had any challenges meeting all of these demands and how you were able to overcome them?

CK Chou: The technology around the thermoelectric module—we call it TEM—is the one we are relying on for CAF-0121. TEM is already a proven solution for cooling overheating components. It is common in things like medical devices, car systems, refrigerators, water coolers, and other equipment that needs quick and accurate temperature control.

These slim devices work by creating a temperature difference when electric current passes through them, causing one side to heat up and the other side to cool down. The more current we send through, the bigger the temperature difference we get between the two sides. And of course the TEM does not run on its own. It is controlled by a microcontroller and a thermal sensor that monitors the temperature inside the device. The firmware we have programmed into the microcontroller takes those temperature readings and decides when to turn the TEM on and how much current to send through.

We have gone through countless trials and adjustments of the firmware settings to ensure our equipment stays in the ideal temperature range. We also had to watch out for condensation, because if a TEM cools down too quickly, it can cause moisture to form on the module surface. And if that moisture gets onto the circuit board, it can cause serious damage. So an appropriate liquid-isolation barrier between the moisture and the circuit board is also necessary.

While people normally use only the cooling capability of the TEM, we had a different idea: why not leverage both the cooling and the heating capability to help our edge device operate in a wider temperature range? The overall concept is that by leveraging the heating capability of the TEM, we can indirectly extend the system’s operating temperature range downward. Conversely, by using the cooling capability, we can cool the system down when the internal ambient temperature rises to a certain level.

Let me say it in a simple way: when the room is getting cold, the TEM operates as a heater; and when the room is getting hot, the TEM operates as a cooler. With a TEM, we are no longer limited to the operating temperature range of the individual components we have selected. It helps us bridge the gap, allowing us to expand the temperature range of our equipment beyond what the components would typically allow. This means we can push the temperature boundaries by using the TEM while the device still maintains reliability.
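
To make that behavior concrete, here is a toy version of the decision logic CK describes. The thresholds, scaling, and structure are invented for illustration; CASwell’s actual firmware runs on a microcontroller and is far more carefully tuned.

```python
# Toy sketch of a TEM control loop: read the internal temperature, then
# drive the module as a heater or a cooler. All values are hypothetical.
HEAT_BELOW_C = 5.0    # start heating below this internal temperature
COOL_ABOVE_C = 45.0   # start cooling above this internal temperature

def tem_drive(temp_c: float) -> float:
    """Return a signed drive level in [-1, 1]: positive heats, negative cools."""
    if temp_c < HEAT_BELOW_C:
        # Colder enclosure -> more heating current, capped at full scale.
        return min(1.0, (HEAT_BELOW_C - temp_c) / 20.0)
    if temp_c > COOL_ABOVE_C:
        # Ramp cooling gently; cooling too fast risks condensation.
        return -min(1.0, (temp_c - COOL_ABOVE_C) / 20.0)
    return 0.0  # inside the comfort band the TEM stays off
```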

And some people might think: why don’t we just use industrial-grade components that support a wider temperature range and make our lives easier? The reality is that those wide-temp components can sometimes cost twice as much as standard commercial ones, and the typical chassis designed for such cases is usually large and heavy. And of course the most important reason is that if we build our equipment just like everyone else, why would customers choose us over the competition? In that case, the CAF-0121 would just end up being another costly device with bulky thermal fans designed to support wide temperature ranges, and that is not what we want.

That’s why we have put a lot of effort into studying the characteristics of the TEM more closely, selecting the right thermally conductive materials, fine-tuning our firmware settings, and testing our device in temperature-controlled chambers day and night. Our goal is to redefine what edge computing hardware can be by offering solutions that adapt to various temperature environments, stay compact and lightweight, and are still competitively priced.

Christina Cardoza: Yeah, it’s amazing to hear about those wide-ranging temperature environments you mentioned—in cars and refrigerators—so I can see the importance of making sure the device is consistently reliable and delivers that performance.

So, do you have any customers that have actually been using CAF-0121 and anything you can share with how they’re using it or in what type of solutions it is in?

CK Chou: This box is going to mass production in October this year, which is next month, and we have already received a few thousand purchase orders from a major European customer focused on cybersecurity applications. They plan to use this device in small offices, warehouses, and possibly outdoor cabinets for electric-vehicle charging stations that need wider temperature support. This really highlights the advantage of the CAF-0121: the customer can use it across both IT and OT applications without needing separate solutions for different operating temperature conditions—and of course that saves them from having to spend extra money.

We also sent samples to around seven or eight potential customers across various industries, including cybersecurity, SD-WAN, manufacturing, and telecom companies doing real-time traffic management. The feedback has been fantastic. Everyone loves the competitive price, which makes our device a great deal. The compact size is another big win, because it can fit into tight spaces and helps lower shipping costs—and it reduces the carbon footprint too.

You know, in today’s market, pricing is a huge factor. We need to deliver cost-effective solutions without compromising on performance and flexibility. So it’s clear that our approach is hitting the mark for customers who need reliable and scalable edge solutions that don’t break the bank. The excitement we are seeing from these industries really proves that we are on the right track, and the CAF-0121 is exactly the kind of solution that can meet their needs.

Christina Cardoza: I can definitely see why the solution needs to be smart and compact, but then also fast, reliable, and high performance. So, I’m curious how you actually make that happen. And I should mention that “insight.tech Talk” and insight.tech as a whole are sponsored by Intel. I know Intel has a lot of processors that make these devices possible, that enable them to run fast in these different environments and in these small form factors. So, I want to hear a little bit more about how you work with technology partners like Intel to make your product line possible.

CK Chou: As we discussed earlier, a solid edge computing device should pack just the right processing power into a compact size, offer a variety of connection options, be energy efficient, and of course come at a competitive price. These are really the basic must-haves for any edge computing device.

That’s why we have chosen the Intel Atom processor for this project. With the Atom we can provide the right level of performance while keeping power consumption low. And thanks to the Intel LAN controller, we could easily add 2.5GbE support to this box to ensure compatibility with most infrastructure requirements and more. The Atom has built-in instructions that accelerate IPsec traffic, making it an excellent choice for security-focused applications. So, whether you are dealing with data encryption, secure communications, or other security jobs, this processor is up to the challenge.

And if we want to further enhance security, the Atom also integrates BIOS Guard and Boot Guard to provide a hardware root of trust. With these two guards we are not just talking about great performance and efficiency; we are delivering a high level of protection for the BIOS and the boot-up process. This level of security is crucial, especially for edge devices that need to handle sensitive information and critical tasks without compromising protection.

I can say that among the various players in this market, only Intel offers a one-stop shop for all these features. They don’t just provide the hardware but also the driver and firmware support. This level of integration made the development of the CAF-0121 project so much easier, and it really shortened our time-to-market. When you have the processing power, security features, and even software support all coming from one reliable partner, Intel, it certainly streamlines the whole process. It not only simplifies the engineering and development work but also ensures everything works seamlessly together.

So, with Intel’s comprehensive support, a hardware designer like CASwell can focus more on optimizing performance and less on troubleshooting compatibility issues. This is a big win for both us and our customers, allowing us to deliver high-quality, reliable edge computing solutions faster and more efficiently.

Christina Cardoza: Absolutely; that’s great to hear. We kept talking in this conversation about making things more cost effective and more affordable, so I’m sure being able to leverage the technology expertise, the processors, and other elements from a partner like Intel helps you focus on your sweet spot instead of having to build things from scratch and make them more expensive than they need to be. So, great to hear how you’re using all of that different technology.

It’s been a great conversation. You’ve really been able to take a technical topic and make it more digestible and understandable. Unfortunately, we are running out of time, but before we go I just want to throw it back to you one last time, if you have any final thoughts or key takeaways you want to leave our listeners with today.

CK Chou: I started working at CASwell 10 years ago, and things were pretty different back then. At that time most of the processing power was centralized. Companies were all about making their servers super powerful and giving them fast internet connections to gather all the data from the edge. Servers were packed with multiple features to handle every use case you could imagine.

Times have changed. It’s all about instant processing and real-time AI calculations. Businesses need to make quick decisions right at the source of the data instead of sending everything back to the central server. That’s why edge computing has become such a big deal. It lets companies process data on the spot without any delay.

But when all the network players are shifting toward edge solutions, the real challenge is: how do we make our equipment different from and better than everyone else’s? With this project, the CAF-0121, we have gained some really valuable know-how by using an old-school technology as an innovative thermal solution for edge equipment, and we have tried to bring added value to our products in this highly competitive market. We also want this small success to inspire our R&D team to stay creative and think outside the box, not just stick to the traditional way of doing things.

Also, thanks to the support from Intel for their edge solutions, including edge-optimized processors—which build in deep learning–inference capabilities—various LAN options for different connectivity needs, and of course all the documentation for integration, plus driver and firmware support. This collaboration has really helped us push our designs to the next level.

Finally, our goal is very simple: to set a new standard for edge computing equipment and provide flexible edge solutions that help customers tackle challenges from the cloud, through the network, and all the way to the intelligent edge.

Christina Cardoza: Well, I can’t wait to see what else CASwell does in this space—as well as the CAF-0121 when it arrives and the different market solutions companies will leverage it for. I invite all of our listeners to visit the CASwell website and contact them to see how they can help with all of your edge and network-edge needs, and to visit insight.tech as we continue to cover partners like CASwell and how they’re innovating in this space.

So, I want to thank you again for joining us today, CK, as well as our listeners for tuning in. Until next time, this has been “insight.tech Talk.”

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

This transcript was edited by Erin Noble, copy editor.

Intel® Xeon® 6 Processors Power AI Everywhere

Organizations worldwide deploy AI to increase operational efficiency and improve their competitive standing in the market. We talk to Intel’s Mike Masci, Vice President of Product Management, Network & Edge Group, and Ryan Tabrah, Vice President & General Manager, Intel Xeon and Compute Products, about the new Intel® Xeon® 6 processors. Mike and Ryan discuss key advancements that power the seamless and scalable infrastructure required for running AI everywhere—from the data center to the edge—in a more sustainable way.

Why is the launch of the Intel Xeon 6 Processors so important to Intel, your partners, and customers?

Ryan Tabrah: The launch is a culmination of many things, including getting back to our roots of delivering technology starting from the fabrication process to enable the AI data center of the future. I think Intel Xeon 6 hits at a perfect time for our customers to continue to innovate with their solutions and build out their data centers in a way that wasn’t possible before. With Intel Xeon 6 processors, E-cores are optimized for the best performance per watt, while the P-cores bring the best per-core performance for compute-intensive workloads that are pervasive in the data centers of today.

Mike Masci: We see Xeon 6 not just as another upgrade, but as a necessity for AI-driven compute infrastructure. The existing data center does not have the performance-per-watt characteristics that allow it to scale for the needs of an AI-driven era. So whether it be networks needing to process huge amounts of data from edge AI to cloud AI, these processors do so in a more efficient and performant way. And within a data center, it enables the infrastructure to support the performance needs of AI while being able to scale linearly.

The consistency of the Xeon 6 platform from edge to cloud and the fact that it can really scale from the very high end to more cost- and power-focused, lower-end products is what developers want. They want an extremely seamless experience where there is no need to mix and match different architectures and systems, because anything that slows them down or creates friction effectively is less time spent on developing AI technology.

Intel Xeon 6 is the first Intel Xeon with efficient cores and performance cores. What are some examples of their different workloads and relevant use cases?

Mike Masci: First, efficient cores are designed and built for data center-class workloads and are highly performant at optimized density and power levels. This is a huge advantage for our customers in terms of composability and the ability to partition the right product for the right workload in the right location without having to incur the complexity and expense of both managing and deploying.

It’s becoming the norm to deploy the same types of workloads at the network edge that are running deep in the data center. People want the same infrastructure on both ends, which enables them to deploy faster and more easily, and save money in the long run.

The most important workloads are cloud native. And that’s where the Intel Xeon 6 E-cores shine. As we think about use cases that take advantage of that, on the network and edge side, the 5G wireless core is one of our most important segments. Where in prior generations it was fixed-function, proprietary hardware, these companies have adopted the principles behind NFV (Network Functions Virtualization) and SDN (Software Defined Networking) and are now moving toward cloud-native technology. This is where the multi-thread performance per-watt optimized piece of Intel Xeon 6 processors is extremely important.

As we look at Intel Xeon 6 with P-cores for other edge applications, customers are very excited about Intel® Advanced Matrix Extensions (Intel® AMX) technology. Specifically, its specialized matrix ISA instructions, inherent in the performance cores, allow them to do lightweight inference at the edge where you might not have the power budget for the large-scale GPUs that are typical of training clusters. And the beauty of AMX is that it’s seamless from a software developer standpoint: with tools like OpenVINO and our AI suites, they can take advantage of AMX without having to know how to program to a specific ISA.

Ryan Tabrah: The reality is that, especially at the edge, customers can’t put in some of the more power-hungry or space-hungry accelerators, and so you fall back on the more dense solutions that are already integrated into the Intel Xeon 6 performance core family.

Video is another good use case. You don’t make money until you can effortlessly scale, pulling videos out and pushing them to the end user. That’s one reason why we focused on rack consolidation for video workloads. It’s something like three-to-one rack consolidation over previous generations for the same number of video streams at the same performance. That’s better performance at better energy efficiency in your data center, serving more clients with fewer machines and greater density. And that same infrastructure can then be pushed out to your 5G networks, to the edge of your network, where you’re caching videos and delivering them to end users.

Can you talk about the Intel Xeon 6 in the context of a specific vertical and use case?

Mike Masci: Take healthcare, where you need a massive amount of data to train medical image models. In order to have actionable data and insights, you need to train the model in the cloud and run it effectively at the edge. You need to run things like RAG (Retrieval Augmented Generation) to make sure the model is doing what it’s supposed to do, especially in the domain of assisting with diagnosis, for example. So what happens when you need to retrain the model? Edge machines will send more data to the cloud, where it gets retrained, and then has to get proliferated back to those edge machines. That whole process for a developer in DevOps and MLOps is an entire discipline, and it’s probably the most important discipline of AI today.

We think the real value of AI will be meaningfully unlocked when you can train models, deploy them at the edge, and then have the edge feed data back so the models can be retrained in the cloud. And having all of that on one scalable system matters a lot to developers.

Ryan Tabrah: Also, healthcare facilities around the world have a lot of older code and older applications running on kernels that they don’t want to upgrade or rework. They want to be able to move those workloads, maybe even containerize them, and put them on a system they know will just run, so they don’t have to touch a thing. We enable them with open-source tools to update the parts of their infrastructure that need it, and with new data centers that bring in the future and connect with their older application base.

And that’s where the magic really happens: someone doesn’t have to start from ground zero. Healthcare institutions have all this old data and old applications, and then they’re being pushed to go do all these new things. And that’s back to Mike’s earlier comment: with a consistent platform underneath, from the edge to the cloud where you’re doing your development, even to your PC, they just don’t have to worry about it.

What are the sustainability aspects that Xeon 6 can bring to your customers?

Mike Masci: The performance-per-watt improvements across some of our network and edge workloads are clear. It’s 3x performance per watt versus 2nd Gen Intel® Xeon® Scalable processors. Simply translated, if you get 3x the performance per watt, you can effectively cut the number of servers you need to a third. That doesn’t just save you CPU power; it saves the power of the entire system, whether it be the switches, the power supply of the rack itself, or any of the peripherals around it.

And it’s our mandate as Intel to drive that type of sustainability mechanism, because in large part the CPU performance per watt dictates the choices that people make in terms of deploying overall hardware.

A great example is the work we’ve done with Ericsson, a leading ISV in the 5G core. In their own real-world testing on UPF, the user plane function of the 5G core, they saw 2.7x better performance per watt versus the previous generation. Even more, in the control plane with 2 million subscribers, Ericsson supported the same number of subscribers with 60% less power versus the prior generation. This comes back to performance per watt and sustainability. But it is also about significant OpEx savings, and doing a lot of good for the world as well. With Ericsson, we are proving it’s not just possible; it’s happening in reality today.

In this domain we have our infrastructure power manager, which allows for dynamically programming CPU power and performance based on actual usage. For example, when the load is low, the CPUs power themselves down. Underlying that, the entire product line has huge improvements in what we call load-line performance. Most servers today are not run at full utilization all the time. Intel CPUs like Intel Xeon 6 do a great job of powering down to align with lower-utilization scenarios, which again lowers overall power needs and improves platform sustainability.

This seems fundamental, but it’s harder to do than you would think. You need to optimize at an operating-system level to be able to take advantage of those power states. You need to make sure that you have the right quality of service, SLA, and uptime, which is a huge deal.

Ryan Tabrah: The efforts we make across the board—in our fabrication, our validation labs, and our supply chain that feeds all our manufacturing—demonstrate our leadership in sustainability. When a customer knows they’re using Intel silicon, they know that when it was born or tested or validated or created, it was done in the most sustainable way. We’re also continuing to drive leadership in different parts of the world around reuse of water and other things that give back to the environment as we build products.

Intel Xeon 6 offers our customers the opportunity to meet their sustainability goals as well. With the high core counts and efficiency that Intel Xeon 6 brings, our customers can look to replace aging servers in their data center and consolidate to fewer servers that require less energy and floor space.

Let’s touch on data security and Intel Xeon 6 enhancements that make it easier for developers to build more secure solutions.

Mike Masci: As we look at security enhancements, which are paramount, especially on the network and edge, bringing our SGX and TDX technologies was a big addition. But taking these technologies to maturity in terms of the security ecosystem is extremely important for customers, especially in an AI-driven world. You need to have model security. You need to be able to have secure enclaves if you’re going to run multi-tenancy, for example, which is becoming extremely important in a cloud-native-driven world. And overall, we really see the maturity of security technologies on Intel Xeon 6 being a differentiator.

Ryan Tabrah: We built Intel Xeon 6 and the platform with security as the foundational building block from the ground up. It’s what we’ve been doing for several generations of Xeon, and we’re making confidential computing as easy and fundamental as possible in the partner ecosystem. With Intel Xeon 6 we are introducing new advances in quantum resistance and platform hardening to enable customers to meet their business goals with security, privacy, and compliance.

Is there anything that you’d like to add in closing?

Mike Masci: Intel Xeon 6 is positioned exactly where it needs to be for AI at the edge and in the network. And we think the idea of making an easy, frictionless platform that also serves multiple workloads easily, with composability, is a home run. To me that is the key message of Intel Xeon 6. It’s seamless and scalable, so you can have the same application running on the edge that you have in the data center without worrying about what hardware it’s running on.

Ryan Tabrah: I agree. Especially in different environments and areas where people are just fundamentally running out of power in their data centers, whether it’s just because they can’t build them fast enough or there are new restrictions and clean energy requirements. We have the solutions in place from their edge to their data centers that just make it super easy for them to see the benefits.

And the best validation, I think, is the feedback from the customers. They want more of it. They want to do more with us. They want to help us not only ramp up the processors as quickly as possible, but then build the next generation as quickly as possible, too. Because they’re excited that Intel is taking a leadership position in key critical parts of telco, edge buildout, infrastructure buildout, and data center, and we are excited to be leading with them.

 

Edited by Christina Cardoza, Editorial Director for insight.tech.

Technology Partnerships Pave the Way to Business Solutions

Many enterprises are eager to adopt the latest technologies, which can help them supercharge efficiency and light the way to better products and services. Innovative solutions are emerging rapidly, offering early adopters an attractive competitive edge. Yet deploying fully integrated solutions is so complicated and time-consuming that many organizations give up after initial trials.

Working with an experienced technology partner like a solutions aggregator can ease frustrations of technology adoption and pave the way to successful deployment. A knowledgeable aggregator can offer end-to-end help specifically designed to accommodate a company’s existing infrastructure and future goals. Some can open the door to a worldwide network of partners and systems integrators. By overseeing the entire process of solution selection, integration, deployment, and scaling, an expert aggregator can proactively remove stumbling blocks and enable companies to get the right solutions up and running quickly at locations across the globe. 

Technology Integration Roadblocks

Enterprises struggle to incorporate new technology for several reasons. Solutions usually require a mix of hardware and software components that must work together seamlessly and fit—or be made to fit—existing infrastructure. Since many large companies use different combinations of legacy and newer technology in different locations, assessing interoperability is a complex process. Multinational firms must also consider regional technology standards and regulations.

“If you’re an IT leader with dozens of objectives on your desk, the last thing you want to do is become a general contractor for every solution,” says Matt Powers, Vice President of Technology and Support Services at Wesco, a leading global supply chain solutions provider. “Engineering the design and choosing multiple contractors for each project becomes an elaborate exercise.”

And during that exercise, the connective infrastructure may shift, adding further complications. “You cannot imagine how fast solutions are evolving,” Powers says. “Innovations are constantly changing the interdependencies between technologies.”

Another hurdle is scaling. Companies often test potential solutions with encouraging results, only to be disappointed when they try to extend deployments.

“You see that a lot, especially with IoT solutions,” Powers says. “Companies will tell us, ‘We’re running our proof of concept (POC) and seeing the results and value we want, but now how do we scale this solution across our global enterprise?’ This is a major challenge for global customers as they need to access and localize technology for different regions. Additionally, they need to identify and work with deployment partners, such as integrators and contractors, to ensure the solution is implemented effectively.”

Technology Partnerships Deliver Innovative Solutions

With more than 100 years of experience as a solutions aggregator and distributor, Wesco can help a wide range of enterprises—including manufacturers, utilities, data centers, retail, and hospitality companies—avoid implementation problems and deploy the right solutions efficiently. The process begins with obtaining a thorough understanding of an organization’s needs.

“What we do differently from other companies is work very closely with stakeholders to understand their particular challenges and assess their opportunities for adding efficiencies or gaining return on their investments,” Powers says. “Once we do that, we can help lead them to the right ecosystem of solution providers and integrators.” To this point, Wesco’s vetted partner network includes more than 50,000 suppliers of hardware, software, and cloud solutions, and integrator partners across the globe.

“What we do differently from other companies is work very closely with stakeholders to understand their particular challenges and assess their opportunities for adding efficiencies or gaining return on their investments.” – Matt Powers @wescocorp via @insightdottech

“The number-one quality we look for in technology partners is their capacity for innovation,” Powers says. “Intel brings us a wide breadth of leading technologies, and the open architecture of its products allows our solutions providers and independent software vendors (ISVs) to develop platforms a variety of end users can access.”

Technology Integration: A Win-Win for Customers and Providers

Wesco strives to be a trusted advisor—suggesting the components, solutions, and partners that work best for each company’s unique environment. WaitTime, an ISV that builds crowd analytics solutions, is one example of how Wesco deploys complete solutions for its customers—from the network edge to the cloud.

WaitTime software applies Edge AI to computer vision cameras, providing information like capacity, crowd density, and shopper insights to venue operators. The software—powered by Intel—is optimized to process data on-site and provide alerts in near-real time.

With WaitTime, companies can catch and solve crowd problems sooner, pre-empting potential hazards and dispatching employees to chokepoints before problems occur. They can also learn where guests or shoppers spend their time, or which areas would benefit from wider pathways, better wayfinding, or other improvements. Making these changes can lead to higher revenue at shops or concession stands.

While WaitTime is simple to use once it’s set up, deployment involves far more than installing the platform. “The software is one piece of the technology. We can bring the other hardware and installation partners together to build an end-to-end solution,” Powers says.

What kind of providers? That depends on the organization.

Companies may or may not be able to upgrade existing security cameras with computer vision. And they must find networks and hardware that can reliably transmit and process enormous volumes of information while meeting all local security and privacy regulations.

These are just a few pieces of the puzzle that organizations must connect before deployment. Wesco can help them make sound decisions and select the right contractors and systems integrators to build, harmonize, and scale all elements of the solution according to their needs.

Technology and Experience Accelerate Business Success

As technology change accelerates, cutting-edge solutions become increasingly important to success, Powers says. “Innovation is rippling across industries quickly, and competition is not slowing down. By understanding how new technologies can help the business and how to deploy them at scale, enterprises can continue to thrive as new capabilities emerge.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

3D LiDAR Delivers Spatial Intelligence

Picture the last time you were at a concert or a similar large event in the heart of a busy city. You probably struggled to find parking or went through security clearance hassles at the gate. And then there were the lines to battle at the concession stand and the restroom. Eliminating these annoyances would dramatically improve visitor experiences. It would help event organizers, too.

It’s why airports, city governments, and entertainment venues bet on LiDAR (Light Detection and Ranging), a pulsed-laser technology especially suited to delivering spatial intelligence. LiDAR delivers information not just on the numbers of people and vehicles but on their flow and interactions. This means that organizers can maintain security and can spot and alleviate bottlenecks in real time.

Advantages of 3D LiDAR

“At large infrastructure sites and events, security and crowd management are not easy, but they’re jobs LiDAR is especially good at,” says Raul Bravo, President and Founder of Outsight, a leader in 3D LiDAR solutions.

“While most people might think of CCTV and IP video cameras when it comes to monitoring devices, their two-dimensional capabilities are limited for tracking a three-dimensional physical world,” Bravo says. “Unlike traditional computer vision, LiDAR can’t tell if a person is wearing a red shirt or green, but it knows that person’s speed or location—while delivering data as a 3D capture.”

Because of LiDAR’s capabilities, digital twins have been using the technology to obtain data about the physical world for a while now. “What is emerging is using LiDAR technology to not only map the static physical world but to digitize the real-time movement of people and vehicles,” Bravo says.

Also reassuring to organizations is that LiDAR is a natively anonymous solution. Because LiDAR captures not images but only the distances of things, privacy is baked in by definition. Monitoring crowds without capturing people’s pictures maintains privacy and is key to meeting a host of governmental regulations.

Because of LiDAR’s capabilities, #DigitalTwins have been using the #technology to obtain #data about the physical world for a while now. @Outsight_tech via @insightdottech

3D LiDAR Data Processing Challenges

While LiDAR has many advantages, processing the resulting data is not easy. Plugging 3D spatial intelligence data into traditional computer vision techniques delivers poor outcomes. Instead, “you have to create specific algorithms and techniques that tackle this specific kind of problem,” Bravo says.

Also challenging is the sheer volume of data that LiDAR generates. “When we deploy LiDAR at some of the biggest airports in the world, we have hundreds of LiDAR units at the same time,” Bravo says. “The data from each is the equivalent of a hundred people streaming Netflix.”

The diversity of designs, models, and manufacturers in the LiDAR space is also a problem. The Outsight platform solves for all these challenges and works with any LiDAR manufacturer or model. The solution develops living digital twins, feeding information about the physical world at a rapid-enough clip—20 times per second—to deliver insights in real time. These insights can route to the right person and take the form of alarm alerts.
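
To picture how a 20-times-per-second stream becomes an actionable insight, here is a deliberately simplified sketch of a zone-occupancy alert. The zone geometry, threshold, and data format are invented for illustration; they are not Outsight’s API.

```python
# Hypothetical sketch: each frame delivers anonymous (x, y) positions of
# tracked people; we count how many fall inside a zone and raise an alert.
from typing import Iterable, Tuple

ZONE = (0.0, 0.0, 20.0, 10.0)  # x_min, y_min, x_max, y_max in meters
MAX_PEOPLE = 50                # alert threshold for this zone

def people_in_zone(positions: Iterable[Tuple[float, float]]) -> int:
    x_min, y_min, x_max, y_max = ZONE
    return sum(1 for x, y in positions if x_min <= x <= x_max and y_min <= y <= y_max)

def check_frame(positions: Iterable[Tuple[float, float]]) -> None:
    """Called 20 times per second with the latest tracked positions."""
    count = people_in_zone(positions)
    if count > MAX_PEOPLE:
        print(f"ALERT: {count} people in zone (threshold {MAX_PEOPLE})")
```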

Visual-Spatial Intelligence Use Cases

The problems that Outsight can help resolve apply to smart cities, transportation hubs, and international events. If too many people are crowded into airport baggage check-in areas, the solution can alert officials downstream to potential traffic jams in the security lines, which they can then staff for.

For example, the city of Bellevue, Washington, uses Outsight’s LiDAR solution to detect near-miss situations at intersections, when vehicles get too close to bicycles or pedestrians. LiDAR is especially well equipped to capture such incidents at night. The information has helped the city take more proactive measures, such as clearer lane markings, to meet its goal of eliminating traffic fatalities and severe injuries by 2030. Outsight LiDAR can also help smart cities address traffic flow problems in real time. For example, if a vehicle uses the wrong lane for merging, a flashing light can alert the driver to remedy the mistake.

When managing the physical flow of people and vehicles on a massive scale, you have to ensure a smooth visitor experience and operational excellence. Key performance indicators like the length of a ticketing line, and the time spent in it, matter here.

At the 2024 Olympics in Paris, Outsight’s LiDAR solution helped with security and crowd management. Outsight was again pressed into service at Tomorrowland, one of the world’s largest music festivals, held annually in Belgium, which attracts hundreds of thousands of attendees.

The Technology Infrastructure for Spatial Intelligence

Spatial intelligence is about digitizing the physical world and creating insights out of it. Achieving this requires processing power that can handle large amounts of data with efficiency. Outsight depends on Intel products and technologies to deploy its solutions at scale.

“You need specific and highly efficient software algorithms that use CPU-based and not GPU-based solutions for energy efficiency,” Bravo says, which is what Intel CPUs deliver.

As for the future, Bravo is excited about the many possibilities—beyond smart cities, airports, and venues—for the digitization of the physical world. “You can have access to a wealth of unique insights and intelligence that was not even imagined before,” Bravo says. “We are entering a new world of digital transformation.”

 

This article was edited by Georganne Benesch, Editorial Director for insight.tech.

Multisensory AI: Reduce Downtime and Boost Efficiency

When you’re waiting by the side of the road for the tow truck, isn’t that always the moment when you realize you’ve neglected your 75,000-mile tuneup and safety check? The “check oil” light and low-tire pressure alert can avert dangerous situations, but you can still end up in that frustrating and time-consuming breakdown. Now scale up that inconvenience and lost productivity to the size of a factory, where nonfunctioning machinery can result in hugely expensive downtime.

That’s where predictive maintenance comes in. Machine learning can analyze patterns in normal workflow and detect anomalies in time to prevent costly shutdowns; but what happens with a new piece of equipment, where AI has no existing data to learn from? Can some of the attributes that make humans good—if inefficient—at dealing with novel situations be harnessed for machine-based inspections?
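One common way to frame that baseline approach, offered as a generic sketch rather than any vendor’s method, is to learn the envelope of a machine’s normal readings and flag values that fall outside it:

```python
import numpy as np

def fit_normal_profile(readings):
    """Learn the mean/spread of healthy-machine readings (vibration, temperature, ...)."""
    x = np.asarray(readings, dtype=float)
    return x.mean(), x.std()

def is_anomalous(value, mean, std, z_threshold=3.0):
    """Flag a reading that sits far outside the learned normal envelope."""
    return abs(value - mean) > z_threshold * std

mu, sigma = fit_normal_profile([4.9, 5.1, 5.0, 5.2, 4.8])  # healthy baseline
print(is_anomalous(7.4, mu, sigma))   # True -> time to schedule maintenance
```

The catch, as Kanga explains below, is that this only works once a baseline of normal data exists.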

Rustom Kanga, Co-Founder and CEO of AI-based video analytics provider iOmniscient, has some answers for these and other questions about the future of predictive maintenance. He talks about the limitations of traditional machine learning for predictive maintenance, when existing infrastructure can be part of the prediction solution (and when it can’t), and what in the world an e-Nose is (Video 1).

Video 1. Rustom Kanga, CEO of iOmniscient, discusses the impact of multisensory and intuitive AI on predictive maintenance. (Source: insight.tech)

What are the limitations to traditional predictive maintenance approaches?

Today when people talk of artificial intelligence, they normally equate it to deep learning and machine learning technologies. For example, if you want the AI to detect a dog, you get 50,000 images of dogs and label them: “This is a dog. That is a dog. That is a dog. That is a dog.” And once you’ve trained your system, the next time a dog comes along, it will know that it is a dog. That’s how deep learning works.
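That label-and-train pattern can be shown in miniature. The toy sketch below trains a small neural network on synthetic “dog” versus “not dog” feature vectors standing in for 50,000 labeled photos; everything in it is illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# Stand-in features, e.g., embeddings of labeled "dog" / "not dog" images
dogs     = rng.normal(loc=1.0, size=(200, 8))
not_dogs = rng.normal(loc=-1.0, size=(200, 8))
X = np.vstack([dogs, not_dogs])
y = np.array([1] * 200 + [0] * 200)   # the hand-applied labels

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(X, y)
print(model.predict(rng.normal(loc=1.0, size=(1, 8))))  # -> [1], "a dog"
# An animal unlike anything in the training set may be misread, which is
# the retraining treadmill described next.
```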

But if you haven’t trained your system on some particular or unique type of dog, then it may not recognize that animal. Then you have to retrain the system, and this retraining goes on and on and on; it can turn into forever-training.

The challenge with maintenance systems is that when you install some new equipment, you don’t have any history of how that equipment will break down or when it will break down: You don’t have any data for doing your deep learning. And so you need to be able to predict what’s going to happen without that data.

So what we do is autonomous, multisensory, AI-based analytics. Autonomous means there’s usually no human involvement, or very little human involvement. Multisensory refers to the fact that humans use their eyes, their ears, their nose to understand their environment, and we do the same. We do video analysis, we do sound analysis, we do smell analysis; and with that we understand what’s happening in the environment.
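As a rough illustration of what fusing those senses can look like (a generic weighted blend, not iOmniscient’s actual algorithm), each modality contributes an anomaly score and the scores combine into one verdict:

```python
def fuse_scores(video, sound, smell, weights=(0.4, 0.4, 0.2)):
    """Blend per-modality anomaly scores (each 0..1) into a single decision."""
    combined = weights[0] * video + weights[1] * sound + weights[2] * smell
    return combined, combined > 0.5   # score, plus a simple alert flag

# Sound alone is alarming even though video and smell look nearly normal
print(fuse_scores(video=0.3, sound=0.95, smell=0.4))  # -> (~0.58, True)
```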

How does a multisensory AI approach address some of the challenges you mentioned?

We have developed a capability called intuitive AI. Artificial intelligence is all about emulating human intelligence, and humans don’t just use their memory function—which is essentially the thing that deep learning attempts to replicate. Humans also use their logic function. They have deductive logic, inductive logic; they use intuition and creative capabilities to make decisions about how the world works. It’s very different from the way you’d expect a machine learning system to work.

“Multisensory refers to the fact that humans use their eyes, their ears, their nose to understand their environment, and we do the same” – Rustom Kanga, @iOmniscient1 via @insightdottech #AI

What we as a company do is we use our abilities as humans to advise the system on what to look for, and then we use our multisensory capabilities to look for those symptoms. For instance, if a conveyor belt has been installed and we want to know when it might break down, what would we look for to predict that it’s not working well? We might listen to its sound: when it starts going “clang, clang, clang,” something is wrong with it. So we use our ability to see the object, to hear it, to smell it to tell us how it’s operating at any given time and whether it’s showing any of the symptoms that we’d expect it to show when it’s about to break down.
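The “clang” test translates naturally into a simple audio check. The sketch below, with an assumed frequency band and threshold, flags sound whose energy concentrates in a metallic-ring band:

```python
import numpy as np

def clang_detector(audio, sample_rate, band=(2000, 6000), ratio_threshold=0.4):
    """Flag metallic impact noise: a high share of energy in a mid-high band."""
    spectrum = np.abs(np.fft.rfft(audio)) ** 2
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / spectrum.sum() > ratio_threshold

sr = 16_000
t = np.arange(sr) / sr
print(clang_detector(np.sin(2 * np.pi * 3000 * t), sr))  # 3 kHz ring -> True
```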

How do you train AI to do this, and to do it accurately?

We tell the system what a person would be likely to see. For instance, let’s say we’re looking at some equipment, and the most likely breakdown scenario is that it will rust. We then tell the system to look for rust or for changes in color. Then, if the system sees rust developing, it will tell us that there’s something wrong and it’s time to look at replacing or repairing the machine.

And intuitive AI doesn’t require massive amounts of data. We can train our system with maybe 10 examples, or even fewer. And because it requires so little data, it doesn’t need massive amounts of computing; it doesn’t need GPUs. We work purely on standard Intel CPUs, and we can still achieve accuracy.
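Encoding “look for rust-colored pixels” as a rule needs no training at all. Here is one possible CPU-only version using OpenCV; the hue/saturation range and the alert tolerance are assumptions, not iOmniscient’s values:

```python
import cv2

def rust_fraction(bgr_image):
    """Share of pixels whose color falls in a rough rust-like hue range."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (5, 80, 40), (25, 255, 200))  # orange-brown hues
    return mask.mean() / 255.0

def needs_inspection(image, baseline, tolerance=0.05):
    """Alert when rust coverage grows noticeably beyond the as-installed baseline."""
    return rust_fraction(image) - baseline > tolerance
```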

We recently implemented a system for a driverless train. The customer wanted to make sure that nobody could be injured by walking in front of the train. That really requires just a simple intrusion system. In fact, camera companies provide intrusion systems embedded into their cameras. And the railway company had done that—had bought some cameras from a very reputable company to do the intrusion detection.

The only problem was that they were getting something like 200 false alarms per camera per day, which made the whole system unusable. So they set the criterion that they wanted no more than one false alarm across the entire network. We were able to achieve that for them, and we’ve been providing the safety system for their trains for the last five years.

Do your solutions require the installation of new hardware and devices?

We can work with anybody’s cameras and anybody’s microphones; of course, the cameras do have to be able to see what you want seen. Then we provide the intelligence. We can work with existing infrastructure for video and for sound.

Smell, however, is a unique capability. Nobody makes the type of smell sensors required to detect industrial smells, so we have built our own e-Nose to provide to our customers. It’s a purpose-built device with six or so sensors in it. There are sensors on the market, of course, that can detect single molecules. If you want to detect carbon monoxide, for example, you can buy a sensor to do that. But most industrial chemicals are much more complex. Even a cup of coffee has something like 400 different molecules in it.
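A sniff from such a device arrives as a short vector of sensor readings, and classifying it can be as simple as nearest-neighbor matching against a handful of labeled examples. The readings and labels below are entirely hypothetical:

```python
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical six-sensor e-Nose readings; each row is one sniff
sniffs = [
    [0.12, 0.80, 0.33, 0.05, 0.41, 0.22],   # ammonia-heavy waste odor
    [0.10, 0.78, 0.30, 0.07, 0.44, 0.25],
    [0.55, 0.20, 0.61, 0.48, 0.12, 0.70],   # solvent leak
    [0.52, 0.18, 0.66, 0.50, 0.15, 0.68],
]
labels = ["waste", "waste", "solvent", "solvent"]

nose = KNeighborsClassifier(n_neighbors=1).fit(sniffs, labels)
print(nose.predict([[0.50, 0.21, 0.60, 0.47, 0.14, 0.71]]))  # -> ['solvent']
```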

Can you share any other use cases that demonstrate the iOmniscient solution in action?

I’ll give you one that demonstrates the real value of a system like this in terms of its speed. Because we are not labeling 50,000 objects, we can actually implement the system very quickly. We were invited into an airport to detect problems in their refuse rooms—the rooms under the airport where garbage from the airport itself and from the planes that land there is collected. This particular airport had 30 or 40 of them.

Sometimes, of course, garbage bags break and the bins overflow, and the airport wanted a way to make sure that those rooms were kept neat and tidy. So they decided to use artificial intelligence systems to do that. They invited something like eight companies to come in and do proofs of concept. They said, “Take four weeks to train your system, and then show us what you can do.”

After four weeks, nobody could do anything. So they said, “Take eight weeks.” Then they said, “Take twelve weeks.” And none of those companies could actually produce a system that had any level of accuracy, just because of the number of variables involved.

And then finally they found us, and they asked us, “Can you come and show us what you can do?” We sent in one of our engineers on a Tuesday afternoon, and on Thursday morning we were able to demonstrate the system with something like 100% accuracy. That is how fast the system can be implemented when you don’t have to go through 50,000 sets of data for training. You don’t need massive amounts of computing; you don’t need GPUs. And that’s the beauty of intuitive AI.

What is the value of the partnership with Intel and its technology?

We work exclusively with Intel and have been a partner with them for the last 23 years, with a very close and meaningful relationship. We can trust the equipment Intel makes; we understand how it works, and we know it will always work. It’s also backward compatible, which is important for us because customers buy products for the long term.

How has the idea of multisensory intuitive AI evolved at iOmniscient?

When we first started, there were a lot of people who used standard video analysis, video motion detection, and things like that to understand the environment. We developed technologies that worked in very difficult, crowded, and complex scenes, and that positioned us well in the market.

Today we can do much more than that. We do face recognition, number-plate recognition—which is all privacy protected. We do video-based, sound-based, and smell-based systems. The technology keeps evolving, and we try to stay at the forefront of that.

For instance, in the past, all such analytics required the sensor to be stationary: If you had a camera, it had to be stuck on a pole or a wall. But what happens when the camera itself is moving—if it’s a body-worn camera where the person is moving around or if it’s on a drone or on a robot that’s walking around? We have started evolving technologies that will work even on those sorts of moving cameras. We call it “wild AI.”

Another example is that we initially developed our smell technology for industrial applications—things like waste-management plants and airport toilets. But we have also discovered that we can use the same device to smell the breath of a person and diagnose early-stage lung cancer and breast cancer.

Now, that’s not a product we’ve released yet; we’re going through the clinical tests and clinical trials that one needs to go through to release it as a medical device. But that’s where the future is. It’s unpredictable. We wouldn’t have imagined 20 years ago that we’d be developing devices for cancer detection, but that’s where we are going.

Related Content

To learn more about multisensory AI, listen to Multisensory AI: The Future of Predictive Maintenance and read Multisensory AI Revolutionizes Real-Time Analytics. For the latest innovations from iOmniscient, follow them on X/Twitter at @iOmniscient1 and LinkedIn.

 

This article was edited by Erin Noble, copy editor.