Cloud Migration Accelerates as Confidence Grows

How secure is the cloud? For years, that has been the number-one question for anyone considering moving critical workloads to the cloud, and the main reason businesses have stayed off it. But all that is starting to change.

“That security concern in the early days was paramount. People just couldn’t get comfortable. But as time has passed, I think it’s pretty much been proven that the cloud can be as secure, if not more secure, than on-premises data centers,” says Jim Jordan, President and CEO of RiverMeadow, a multi-cloud migration company.

Over time, the expanding range of capabilities and services has fundamentally increased the cloud’s ability to run mission-critical workloads, or any workloads at all. The benefits can no longer be ignored. Businesses want lower CapEx, elasticity, and the on-demand capabilities that the cloud offers, according to Jordan.

But migration challenges haven’t changed. And most organizations still lack the in-house expertise to do it on their own.

On-Demand Cloud Migration Expertise

Jordan finds that when businesses start to move to the cloud, they quickly realize they don’t even know what’s running in their data centers. For instance, how many servers and applications do they have, and what servers make up an application?

“Now that security has kind of been checked, the key blocker is that organizations still don’t have their people trained on how to run their business in the cloud,” Jordan says.

RiverMeadow saw the need for automated cloud migrations early on when the company launched in 2013. But at the time, the market wasn’t ready yet.

Only in the past three to four years have market maturity and confidence started to grow, Jordan explains. But he estimates that 70% of the market still has yet to move entire estates to the cloud—making it a perfect time for companies to leverage the expertise from a multi-cloud migration specialist like RiverMeadow.

“Think of RiverMeadow as the moving truck,” says Jordan. “We show up at your house, we pack up all your assets—the infrastructure and applications—we move them to your new place, we plug them in, and they work.”

RiverMeadow assesses an organization’s data center and then determines what workloads can and cannot be moved.

70% of the market still has yet to move entire estates to the #cloud—making it a perfect time for companies to leverage the expertise from a #MultiCloud migration specialist like @RiverMeadow1. via @insightdottech

In the past, migration was limited to simpler, single applications such as email or productivity apps, according to Magnus Rusk, Technical Manager EMEA at RiverMeadow. Customers didn’t trust applications to work as well in the cloud. But as more companies start to experiment in the cloud, they are quickly realizing all the possibilities.

“It’s database applications, it’s ERP systems, it’s just everything out there,” Rusk says. “Customers are starting to migrate more complex applications because they have more trust in the cloud.”

A Lift, Shift, and Optimize Cloud Migration Strategy

When organizations need to get to the cloud as quickly and easily as possible, RiverMeadow takes a lift-and-shift cloud migration approach. That’s why assessing an organization’s application landscape and all its dependencies upfront is so vital. It allows a cloud migration to be swift and efficient, with little disruption as the applications are effectively “lifted” from their on-prem environment and “shifted,” as is, to the target private or public cloud.

If businesses are running on legacy operating systems, they can also “lift and optimize” their workloads as they move to the cloud with RiverMeadow’s OS modernization capability. This enables them to upgrade their legacy Windows and Linux operating systems to more current versions, helping them to reduce operational risk and avoid extended support costs.

RiverMeadow also helps businesses right-size for the cloud, so they don’t over-provision resources and end up paying too much for memory or CPU. Businesses can scale resources and memory up or down as needed, something that is often a problem with on-premises data centers, according to Rusk.

“We have a customer that bought a whole bunch of new gear for its on-prem data center. And it was two to three months before they could physically get to use that gear. The benefit of cloud is, effectively, with the swipe of a credit card, I can spin up more storage capacity if that is what I need. I can spin up virtual machines,” he says.
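The elasticity Rusk describes is ultimately just an API call. The minimal sketch below assumes AWS and its boto3 SDK; the AMI ID, instance type, and volume size are placeholder values, not details from any RiverMeadow project.

```python
import boto3

# Assumes AWS credentials are already configured in the environment.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Spin up a virtual machine on demand (the AMI ID is a placeholder).
instances = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.large",
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances["Instances"][0]["InstanceId"])

# Add storage capacity just as quickly: a 500 GB gp3 volume.
volume = ec2.create_volume(AvailabilityZone="us-east-1a", Size=500, VolumeType="gp3")
print("Created volume:", volume["VolumeId"])
```

The contrast with the two-to-three-month hardware lead time Rusk mentions is the point: capacity arrives in seconds rather than shipping crates.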

The Need for Cloud Migration Speed

RiverMeadow recently did a migration for Mitel Communications in less than 90 days (Video 1). “They were under a time pressure,” says RiverMeadow’s Global Marketing Director Emma Tompkins. “And we were able to do a phenomenally quick migration for them.” The migration consisted of moving 1,000 VMware workloads that run customer services to Google Cloud VMware Engine.

Video 1. RiverMeadow lifts, shifts, and optimizes workloads for customers in a fast, efficient way. (Source: RiverMeadow Software)

Another project involved moving the IT infrastructure of Cambridge University Press to the cloud—which resulted in substantial CapEx and OpEx savings, according to Tompkins. RiverMeadow was able to migrate about 750 servers from two on-premises data centers to Amazon’s AWS cloud services in about six months.

Post-migration, RiverMeadow performed acceptance tests to validate that applications and components were running as expected. It helped automate common operational tasks such as application monitoring, operating system monitoring, endpoint protection, and patch management.
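Acceptance tests like these are often little more than scripted health checks run against each migrated application. The sketch below is a generic illustration rather than RiverMeadow’s tooling; the endpoints and expected responses are hypothetical.

```python
import requests

# Hypothetical endpoints for migrated applications; replace with real URLs.
CHECKS = {
    "web-frontend": "https://app.example.com/health",
    "orders-api": "https://api.example.com/orders/ping",
}

def run_acceptance_tests():
    failures = []
    for name, url in CHECKS.items():
        try:
            resp = requests.get(url, timeout=5)
            if resp.status_code != 200:
                failures.append(f"{name}: HTTP {resp.status_code}")
        except requests.RequestException as exc:
            failures.append(f"{name}: {exc}")
    return failures

if __name__ == "__main__":
    problems = run_acceptance_tests()
    print("All checks passed" if not problems else "\n".join(problems))
```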

Making Way for More Cloud Migrations

According to Tompkins, what those customer migrations had in common was a top-down mandate to move to the cloud. Whether the goal is controlling costs or reducing OpEx, top business leaders are the ones making the decision to migrate.

“Everyone is saying, ‘We may not know everything we need to know about this thing called cloud, but we have to have a cloud strategy as a corporation, as a strategic mandate,’” says Jordan. And then it is up to the IT teams to figure it out and do the heavy lifting.

To address these challenges, RiverMeadow works with partners like Intel®. Not only does Intel provide the hardware to make moving to the cloud possible, it also funds pilots to help businesses understand private, public, or hybrid clouds. This breaks down any lingering fear, uncertainty, or doubt a customer has about migration, Jordan says.

“We can go in and demonstrate our capability to businesses. Show them how fast and easy it is and open their eyes to move to the cloud a lot faster,” he says.

As businesses head into a cloud-native future, Jordan expects rapid adoption and acceleration over the next couple of years. Customers will not only get more comfortable with migrating but also with using multiple clouds and moving between clouds to leverage pricing and capabilities, Jordan explains.

Increasingly, they will want to perform all these actions themselves. RiverMeadow will continue to support these transitions with its fixed-price, easy-to-use, self-service, multi-cloud migration platform. “That way,” says Jordan, “customers can start to move their workloads to and between any clouds on their own, with a single pane of glass.”

This article was edited by Christina Cardoza, Senior Editor for insight.tech.

Processor Innovation + Virtualization Power Edge Computing

When we look back on the early 2020s, we’ll remember how adversity compelled us to adapt and, in many cases, revealed parts of ourselves we didn’t know we had. Globally, we have experienced shopping, attending classes, working with colleagues, and even spending time with family in whole new ways.

The same is true in the tech world. Faced with industry-wide digital transformation, electronics companies had been maneuvering for position before the COVID-19 pandemic. Then, like dominoes, quarantines turned into production slowdowns, which turned into supply chain disruptions.

ASRock Industrial, an IoT edge solutions OEM, was one of them. The company had spun out of ASRock Inc., a provider of industrial-grade products, less than four years before COVID hit. Suddenly, demand from its low- to medium-volume factory automation, robotics, and security customers became uncertain. The ASRock Industrial team realized it could no longer rely solely on its traditional off-the-shelf hardware business.

“We are moving from pure hardware to gradual value-adds,” says James Lee, President of ASRock Industrial. “We build application-ready platforms for our clients, which include the industrial PC itself; middleware; containers to host different types of operating systems; and shared memory technology to speed up the transmission of data.”

Flexibility Is Key to IoT Edge Computing Evolution

For a hardware OEM to evolve into an IoT edge solutions provider, it needs a flexible foundation with the performance to satisfy multiple use cases. Otherwise, completely custom designs would be required for every customer, which is neither scalable nor cost-effective.

Understanding this, ASRock Industrial doubled down on its heritage of Intel® technology-based designs by adding support for 12th gen Intel® Core processors (previously known as “Alder Lake”). These new processors introduce a hybrid architecture with up to 16 Performance- and Efficient-cores that adapt seamlessly to edge workloads.

According to ASRock Industrial engineers, performance improvements in the 12th gen Intel Core processors’ single- and multi-threaded processing combine with the platform’s real-time capabilities to enable container-based microservices on industrial automation machines.

As a result, ASRock Industrial can help customers consolidate functions like Intel® OpenVINO Toolkit-driven AI inferencing, motion control, and other capabilities onto a single device, as it did for one industrial customer that builds an automated optical inspection (AOI) system.

Consolidating IoT Optical Inspection on a Single IPC

For years, the customer used a multi-PC setup to perform product quality inspections at a manufacturing plant. The system included a Windows-based industrial control PC for machine vision and image processing, and a separate Linux platform that ran AI inference for the image inspection models.

The 12th gen Intel Core processor introduces a hybrid architecture with up to 16 Performance- and Efficient-cores that adapt seamlessly to #edge workloads. via @insightdottech

Because the two systems had to constantly pass imaging data back and forth over a physical LAN, latency became an issue. The delays were so pronounced that eventually the AOI became the bottleneck in the entire manufacturing process. This prompted the customer to look for a new single, self-contained system architecture. Partnering with ASRock Industrial, it arrived at the 12th gen Intel® Core Desktop Series processor-powered IPC.

As shown in Figure 1, the iEPF-9010S supports all the functionality the AOI requires by hosting applications previously run on the two systems in different virtual machines. Hardware-assisted Intel® Virtualization Technology (Intel® VT) and native real-time connectivity for the application’s deterministic tasks make this possible. Plus, it’s compatible with OpenVINO, which can be used to accelerate AI workloads.

Figure 1. A combination of engineering expertise and the iEPF-9010S consolidates a multi-system edge computing setup—improving data throughput 100x. (Source: ASRock Industrial)

But the IPC didn’t support the AOI application out of the box. To facilitate the new environment, ASRock Industrial partitioned the workloads using a KVM hypervisor and developed a virtual LAN that replaced the physical data exchange connection. The company also designed a software tool that lets users optimize the platform even further by sharing memory between the virtual machines.

In all, the AOI system’s data transmission speeds increased by 100x compared to the dual-platform configuration.
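The pattern ASRock Industrial describes, with two workloads on one host exchanging data over a virtual rather than physical LAN, can be reproduced with standard KVM/libvirt tooling. The sketch below uses the libvirt Python bindings to define an isolated VM-to-VM network. It is a generic illustration, not ASRock Industrial’s software, and the network name and address range are assumptions.

```python
import libvirt

# Isolated virtual switch for VM-to-VM traffic; no physical NIC is involved.
NETWORK_XML = """
<network>
  <name>aoi-internal</name>
  <bridge name='virbr-aoi'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'/>
</network>
"""

conn = libvirt.open("qemu:///system")      # connect to the local KVM hypervisor
net = conn.networkDefineXML(NETWORK_XML)   # register the virtual LAN
net.create()                               # bring it up now
net.setAutostart(True)                     # and on every host boot
print("Virtual LAN active:", net.isActive() == 1)
conn.close()
```

Guests attached to a network like this exchange packets entirely in host memory, which is what removes the physical-LAN hop that had made the AOI the bottleneck.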

A New Era of IoT Edge Solutions & Suppliers

The outcome of the project is better than either party could have hoped for. The customer, of course, not only removed the bottleneck in its manufacturing plant but also benefitted from the reduced cost and complexity of having to manage one system instead of two. And on the surface, ASRock Industrial got a design win and a happy customer.

But going deeper, the technology enabled by 12th gen Intel Core processors probably transformed the IoT edge computing provider as much as it transformed its customer. With the processors’ inherent versatility, ASRock Industrial can now iterate on top of its own open-architecture hardware with middleware and design services that get customer solutions to market faster.

“We see this kind of customization as a trend,” Lee explains. “Right now, we’re introducing this concept to our customers because they tend to be able to deploy their software. In terms of the business right now, we will gradually add on customization and software capabilities into our products, and my expectation is after five or ten years it will be more than 30% of our business.”

This adaptability, born out of adversity, is what separates success from failure in uncertain times.

 

Listen to Inside the Latest Intel® Processors with ASRock Industrial on our IoT Chat podcast to discover even more about ASRock Industrial and how they are leveraging the latest 12th gen Intel Core Processors.

 

This article was edited by Georganne Benesch, Associate Content Director for insight.tech.

Standing on the Edge of Smart Retail Possibilities

Most of us grew up knowing retail and hospitality as hands-on businesses—the server handing you a menu, the ring of a cash register (not to mention cash!), perhaps even a string tied around your bakery box. But times change and needs change. And a lot of legacy businesses out there—like many shops and restaurants—still struggle with the hows, whens, and whys of the transition to digital. Enter companies like edge computing platform provider Reliant.

We talk with its CTO and Co-Founder Richard Newman about how brick-and-mortar businesses are transforming into smart retail operations with cloud- and edge-based architectures, what can be done about the supply chain problem, and the best way to transition out of legacy systems.

How is edge computing addressing the changing needs of retailers and restaurants?

Because of COVID and the pandemic, a lot of the industry has doubled down on contactless—whether that be payment, or buy online and pick up in-store, or delivery-based services and fulfillment out of physical stores or restaurants. And so a lot of older brands, ones that hadn’t invested as heavily in technology as they might have, either shrank substantially or even went out of business.

The innovative companies, the ones that were already pushing the envelope with things like autonomous shopping, self-checkout, tight integrations and delivery, advanced omnichannel-based order management—they’re the ones that did the best coming through COVID.

But brick and mortar isn’t going away by any stretch. It’s really becoming much more of a hybrid model. That has forced changes in the systems deployed in that environment, changes that require modernization and re-architecting of those actual physical systems. Take quick-serve restaurant operators. Suddenly they’re seeing a much larger percentage of orders being processed outside of their restaurants, and their current systems and the way their kitchens are configured can’t necessarily keep up unless they are investing in next-generation technologies.

How can businesses go about rethinking their systems to effect these changes?

It’s an incremental approach, but edge computing provides some great ways to do it. Existing legacy systems can be virtualized and consolidated: virtual machines can run side by side, and workloads can move from a monolithic architecture to delivery as lightweight containers. Then all of that can be connected much more closely to cloud-based systems and services. That’s how to make a start. It could be one simple application at a time, but along the way they’ll be able to develop a much more agile architecture.

And changing up the architecture, going from legacy to an edge-based approach, provides an opportunity. If you think about the way things used to be back when we were talking about data centers, it wasn’t so easy. You had to actually order a server from somebody, wait for that server to arrive, rack it, connect it to the network, provision it, and load software on it. Whereas if you’re running a cloud-based infrastructure, people think nothing of adding another virtual machine or virtual host or another set of container-based workloads in the cloud. But many operators, when they need a new system deployed in their physical store or physical restaurant, are still stuck in the older mindset.

Edge changes that. In an edge-based workload, they’ll have a foundational system—like Reliant’s platform—that will sit at their physical store or physical restaurant. And if they want to add another virtual machine, they provision it and it pops up. If they want to add another set of containers connected to a Docker container registry—a set of applications they may have—all those actions are done through the cloud, and they’re up and running in the store or restaurant. It’s really that easy, and that’s the big change.
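What adding another set of containers can look like in practice is sketched below, using the Docker SDK for Python against the engine on an in-store edge node. The registry address and image name are hypothetical, and this is a generic illustration rather than an API of Reliant’s platform.

```python
import docker

# Connect to the Docker engine running on the in-store edge node.
client = docker.from_env()

# Pull a workload image from the brand's container registry (names are placeholders).
image = client.images.pull("registry.example.com/store-apps/kitchen-display:1.4.2")

# Run it alongside the other workloads already on the node.
container = client.containers.run(
    image.tags[0],
    name="kitchen-display",
    detach=True,
    restart_policy={"Name": "always"},   # survive reboots and power cycles
    ports={"8080/tcp": 8080},
)
print("Workload running:", container.short_id)
```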

Tell me about the importance of Intel® to achieving these scenarios you’re describing.

Customers want to be able to have something that’s very leverageable, but that’s also going to be highly reliable. Suddenly they’re going to be concentrating more workloads running on what might be a cluster, and they’re going to want to have scalable capacity. That moves them off of what might be consumer-grade components and hardware to something that’s more data-center grade.

That’s where Intel® has been doing a lot of tremendous work, and the silicon it produces and the architectures that it’s supporting lead the industry in that space. And it’s coming out with new stuff all the time, which is just fantastic for these types of applications.

Are you seeing any hesitation in your customers to transform?

What often happens in the physical world in retail and hospitality is that everyone goes through various levels of hardware upgrade cycles, or network upgrade cycles, or major application shifts. So it’s at those moments of transition that they can take advantage of moving to an edge architecture instead of doing the same old thing. Take advantage of the changes they have to do, to make the changes they need to do.

“Many operators, when they need a new system deployed in their physical #store or physical #restaurant, are still stuck in the older mindset. #Edge changes that.” –Richard Newman, CTO and Co-Founder @reliantio, via @insightdottech

In most cases, when they dig in, there’s real ROI associated with moving architectures. They end up with less physical gear overall, which means there’s less to break and less cost associated with break fix. Higher reliability, better uptime. It’s easier to support and service the edge-based products, too.

How does it all work in terms of the technical complexity?

Edge-based systems should look and act like cloud-based systems, relative to how they are managed and operated. Businesses should be using easy-to-use GUI tools for configuration. They should have configuration on a highly automated basis. They should be able to use an API to manage configuration orchestration. And that mirrors cloud.

Anyone who’s been through a cloud-migration project knows what’s involved with doing it. They understand that there are new technical skill sets that come up along the way. At some point organizations have to look at what they’re doing in their physical premises, and they’re going to say, “What are things going to look like going forward? How are we going to evolve to accommodate that?” That’s where companies like Reliant come in. We’re rolling up our sleeves and helping our customers achieve success.
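As a concrete illustration of the API-driven, cloud-like configuration Newman describes, the sketch below posts a desired-state document for one store to a management endpoint. The URL, fields, and token are hypothetical placeholders, not a documented Reliant API.

```python
import requests

# Hypothetical management API for an edge platform; URL and token are placeholders.
API = "https://manage.example-edge.io/v1"
HEADERS = {"Authorization": "Bearer <token>"}

# Declare the desired configuration for one store's edge cluster.
desired_state = {
    "site": "store-0142",
    "workloads": [
        {"name": "pos", "type": "vm", "vcpus": 4, "memory_gb": 8},
        {"name": "signage", "type": "container", "image": "signage:2.1", "replicas": 2},
    ],
}

resp = requests.post(
    f"{API}/sites/store-0142/config",
    json=desired_state,
    headers=HEADERS,
    timeout=10,
)
resp.raise_for_status()
print("Config accepted, revision:", resp.json().get("revision"))
```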

Where do you see edge technology being critical to this evolution?

When we look forward, we’re going to see a world where there are many more opportunities to use AI and ML to support facilitated retail, facilitated restaurant operations, etc. Let’s just pick the most obvious and easy one—walk-in-walk-out frictionless checkout. We’ve grown up in a world where we’re used to waiting on checkout lines, and having our products scanned and weighed, our shopping carts emptied out, our products handled—all by a cashier.

Machine learning and machine vision and related technologies like LiDAR and shelf sensors, and the ability to wrap all of that around in a web-to-edge-based architecture, eliminates the need for all that. The benefits are numerous, and they result in happier customers. They result in employees who are able to do jobs that involve more than just weighing a bag of brussels sprouts.

Another example—every restaurant has food prep going on. But if someone says they’re allergic to dairy yet sour cream ends up on their burrito, that’s an expensive mistake and it’s an unpleasant customer experience. But it’s really not much of a job for a camera to alert someone that there’s a problem. And it’s not just order quality, but food safety. How long has something been sitting out? What’s the temperature of a refrigerator or a cooking surface? A computer can catch mistakes, or help people make smarter decisions.

Are these technologies providing some new benefits to the supply chain?

Certainly. Edge computing can provide options for getting much smarter about what physical inventory or food products are being maintained in a store or restaurant. It’s better than having a manager run around with a clipboard trying to figure out what’s going on. And the data can be real time. So it provides opportunities for more dynamic pricing. It provides opportunities to potentially tell a customer, “We don’t have this product at this store, but we have it at this other store and we’ll find a way to get you what you need.”

Of course there are challenges with supply chains that stretch halfway around the world to where products are manufactured. And the technologies we’re talking about here can provide visibility into: Where is something in the manufacturing process? Where is something in shipping? How long has it been at the dock? When am I going to get it in my store? And how can I use all this data to optimize my ability to give the customer the product or service they want at the best possible price?

How is Reliant helping customers make the transition?

We’re focused on making it as easy as possible for our customers to take their existing legacy applications and run them as virtual machines. Or to define new, container-based workloads that they want to run, and plug it all together.

That puts us in a great position. Customers count on us to be smart and knowledgeable about the retail or the restaurant application stack. So we’re very conversant in payment, point of sale, signage, kitchen automation, order management, RFID—all these components that drive the modern store or the modern restaurant. We bring that business knowledge together with the customer’s requirements on edge computing on a platform that’s as open and as agnostic as possible.

Management of systems at scale becomes both an opportunity and potentially a challenge. All of these systems that are running in cloud and edge with a high degree of integrity have got to be managed. So an important part of what we try to do is to really make configuration highly automated, but also highly visible.

Another focus we have is making sure that, when customers are thinking about their workloads, they end up in situations where those workloads can run even if cloud connectivity is compromised, not available at all, or just degraded. And that’s very important, because the business needs to happen no matter what. It can’t be, “Sorry, we’re not going to be able to make your burger today because we’ve lost cloud connectivity. Our stove works, but we just don’t know how to cook a burger without the cloud-based system telling us what to do.” Resiliency is super important. It’s part of what you do with edge-based computing.

We’re really very excited about the pace of change right now. This couldn’t be a better time. Obviously COVID has driven a lot of work to us as organizations have scrambled to adapt. But coming out of COVID we’re seeing another phenomenon, which is everyone suddenly saying, “All these projects I had—for whatever reason they’ve been on hold. Now I’ve got to get going with them.” So we’re busy. It’s delightful. And I just want to keep it going.

Related Content

To learn more about edge and cloud retail experiences, read Edge + Cloud = Advanced Retail Operations and listen to the podcast Smart Retail Needs Edge Computing with Reliant. For the latest innovations from Reliant, follow them on Twitter at @reliantio and on LinkedIn at Reliantdotio.

 

This article was edited by Erin Noble, copy editor.

Why Big Memory Is a Big Deal for Big Data

Ever hear the saying “too much of anything is a bad thing”? That is exactly what is happening with data today. While information has become the lifeblood businesses rely on to make decisions and improve operations, the size of data is outpacing the memory available to store it. When that happens, performance and progress slow down or come to a halt.

This problem is only expected to grow as real-time workloads and data-intensive applications become more common.

“We are in an age of information. The amount of data being created is significantly increasing every day. And it needs to be processed rapidly. Today’s computer systems, based on the von Neumann architecture, are no longer able to keep up with this influx,” says Jonathan Jiang, Chief Operating Officer at MemVerge, a big memory software provider.

Storage I/O Is Not the Answer

This is especially difficult in the biosciences space, where it is not uncommon to have a data set exceed a terabyte. “When data is bigger than memory, the research often cannot be completed. In many cases, the program will just report an error and exit,” Jiang explains.

To get around this, researchers have traditionally had to swap data between memory and disk. This process, known as storage I/O, results in a lot of time wasted just reading from and writing to disk. For example, when researchers are in the middle of an analysis or experiment, they store data for persistence to protect against any program failures or for future reproducibility.

While the data is being copied to or read from storage, the researcher is forced to sit around and wait for that to be completed. This can equate to hours of downtime. Additionally, if the workload fails midstream, then the researcher has lost all their progress.

“One of the fundamental bottlenecks for performance of the von Neumann model is that data needs to be moved between memory and storage. And when you need to move data between a fast media and a slow media, your performance drops. As the amount of data continues to explode, that weakness in the computing infrastructure will be more pronounced,” says Jiang.

To address this problem, MemVerge has pioneered a new category of computing: big memory (Video 1), which allows applications to bypass traditional storage systems in favor of persistent memory. Jiang explains that this can result in a 10x performance improvement for data-intensive applications.

But for big memory to really take off, it will require software innovations like the ones MemVerge has made. The company’s snapshot technology eliminates I/O to storage and recovers terabytes of data from persistent memory in seconds.

https://www.youtube.com/watch?v=843_ibMpXAI

Video 1. MemVerge is leading the next generation of in-memory computing called big memory computing. (Source: MemVerge)
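MemVerge’s software itself is proprietary, but the underlying idea of keeping a working set in byte-addressable persistent memory instead of shuttling it through storage I/O can be sketched with standard tools. The example below memory-maps a file on a DAX-mounted persistent memory namespace; the mount path is an assumption, and this illustrates the general technique rather than MemVerge’s snapshot technology.

```python
import mmap
import os

# Assumes an Intel Optane persistent memory namespace mounted with DAX,
# e.g. /mnt/pmem0 (the path is an assumption for illustration).
PMEM_FILE = "/mnt/pmem0/dataset.bin"
SIZE = 1 << 30  # 1 GiB working set for the example

fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, SIZE)

# Map the persistent memory directly into the address space: loads and stores
# hit the memory media, with no read()/write() storage I/O in the hot path.
buf = mmap.mmap(fd, SIZE, mmap.MAP_SHARED, mmap.PROT_READ | mmap.PROT_WRITE)

buf[0:5] = b"hello"   # ordinary byte-level access
buf.flush()           # msync: make the update durable
print(bytes(buf[0:5]))

buf.close()
os.close(fd)
```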

Big Memory Offers Big Results

This is the exact answer Analytical Biosciences, a leader in single-cell genomics, was looking for in its research to fight cancer and help stop the spread of COVID-19. The organization found that more than 50% of the time in its multistage analytic pipeline was spent just loading data from storage.

To overcome this storage bottleneck, accelerate its discoveries, and be able to make faster predictions, Analytical Biosciences turned to MemVerge for help.

#BigMemory can result in a 10x performance improvement for #data-intensive applications. @MemVerge via @insightdottech

“Our goal and what we enable is big memory computing, which allows us to keep all the data in memory all the time, thereby eliminating that storage I/O,” says Jiang. “Even the fastest, high-end storage solutions available today are still an order of magnitude slower than what memory can do.”

With MemVerge’s big memory computing platform, Memory Machine, Analytical Biosciences was able to leverage the solution’s memory snapshot technology, clone the data, and write it to persistent memory. This enabled Analytical Biosciences to load data 800 times faster, eliminate 97% of its storage I/O, and reduce the overall pipeline time by over 60%.

“In another use case for snapshots, you can move workloads seamlessly between on-prem data centers to cloud data centers and between different clouds. There are many interesting new operational concepts that can be enabled with big memory,” Jiang explains.

A New Era of In-Memory Computing

In the past, it has been too expensive to keep all data in memory all the time. But recent advancements in persistent memory from Intel® have brought the price per gigabyte much lower than that of traditional DRAM.

By utilizing Intel® Optane technology in its Memory Machine solution, MemVerge provides more capacity and persistence in memory—improving application performance, scalability, and reliability.

As applications become more data intensive and memory becomes faster, Jiang predicts every industry will change its applications to take advantage of a big-memory infrastructure.

For instance, it is critical for the financial industry to provide services with high performance and low latency. Big memory will be crucial for them to stay competitive and transport/share data faster. Big-memory computing can also help the media and entertainment industry, which deals with a lot of the same interruptions in its pipelines as the biosciences space because of its large data sets.

App developers who have made accommodations for calling data sets from storage, bringing them in bit by bit for performance reasons, will have to rethink how they write their applications. “When the application developers and the IT operations organizations shift their mindset to big-memory computing, a lot more can be done,” says Jiang.

To make it easier to adopt its big-memory technology, MemVerge provides an SDK that allows customers to take advantage of the underlying infrastructure, develop new applications, and make use of its capabilities directly.

“This will change the face of the data center. When memory can cross physical machine boundaries, the focus of applications will change. They won’t need to optimize around memory usage,” says Jiang. “When that happens, that’s when big memory will really take off.”

 

This article was edited by Georganne Benesch, Associate Content Director for insight.tech.

Building Smart Spaces for Communities

It’s easy to get excited about the latest technological bells and whistles, but when all is said and done, the point of innovation is to improve the real lives of real people. And technology needs to serve that goal. It’s one thing to collect masses of data, but what do we do with it then? How can we employ it to create a more livable world? Safer crosswalks, more personalized retail experiences, biosecurity for our food sources, even more usable dog parks.

We talk about how to get there with Ken Mills, Chief Executive Officer for global AI and IoT technology provider IntelliSite, and Justin Christiansen, General Manager of IoT Platform & Solution Sales at Intel®. They discuss the need to create smart communities—not just smart cities—the benefits of the as-a-service model, smart-space AI, and IntelliSite’s ongoing partnership with Intel®.

What is the difference between smart cities and smart communities?

Ken Mills: When you talk to users about being a smart city, or a digital city, or a safe city, they can’t always relate because often they’re not actually a city, right? They might represent a state agency, or a county, or a campus, or any small community. It makes much more sense to approach the market from a community perspective—a group of people coming together to find a solution around making their community safer, smarter, or more connected.

Most people equate technology with building blocks—a set of Legos that you put together to get your desired outcome. But most communities do not want to build Legos. They want to order a pizza. They want the whole thing delivered to them, hot and ready to eat. They don’t want to have to worry about how it was put together, or have the responsibility of putting it together themselves. They just want it delivered ready to go.

And we found that when most communities think about IoT or AI, they’re not ready to build the Lego pieces and worry about whether they put them together right. Did they follow all the directions? Do they have the right skills? Where do they start? They want to know that when they decide on a project, and they decide to spend their time and money on that project, that they’re going to get the outcome they expected when they started it.

And so by delivering it as a service, we’re able to ensure that they get the pizza and they’re not left with a bunch of Legos they don’t know how to put together.

What exactly is a community-as-a-service model and its importance?

Ken Mills: Being able to predictably lock in cost and know what they’re going to get for that cost, and then also getting the benefit of having a company continue to innovate and provide additional features and functionality as those become available within that fixed cost—that’s very appealing to both sides of the equation. As a business owner, I have fixed revenue that I can count on that allows me to invest further in our technology. And those customers get the best product at all times, every time they need it.

It really is a whole new way for communities to consume technology, as well as to ensure that they’re never left behind—which is often the case in the public sector. Public-safety customers don’t often get the latest and greatest in technology. It’s a great opportunity for them; it’s a great opportunity for us. And we see this model taking off in many areas across the country and outside the US.

Justin, what are the trends you’re seeing in smart spaces from an Intel® perspective?

Justin Christiansen: Safety is one of the key focus areas for customers as they deploy smart city technology. But it’s also about giving people a better experience in places like stadiums, theme parks, and cruise ships. Personalizing their experiences. The adoption of AI really enables our customers to improve their business operations and the customer experience—for example, technology in areas like the retail environment providing more seamless checkout, or making sure that shelves are fully stocked.

And with COVID, the integration of robotics to minimize person-to-person interaction has also been important to providing a better experience and a safer experience. That’s true across all smart spaces.

Ken, can you lay out some of the use cases you’ve worked with?

Ken Mills: One is smart and safe intersections. As people are out and about more, communities are really looking to ensure that crosswalks are safe, reducing pedestrian fatalities in a concept called Vision Zero. There are a lot of different ways you could do that—better street marking, better lighting, intelligence for traffic lights.

Video technology is also a great tool for improving intersection crosswalk safety. You can do that with AI technology and edge computing, leveraging Intel chipsets and the OpenVINO toolkit to really improve that process, oftentimes reducing the cost of deploying those technologies.
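As a rough illustration of the OpenVINO piece of such a pipeline, the sketch below runs a pedestrian-detection model against frames from an intersection camera. The model file, camera URL, and confidence threshold are placeholders; pre- and post-processing depend on the specific model chosen, and this is not IntelliSite’s production code.

```python
import cv2
from openvino.runtime import Core

core = Core()
model = core.read_model("pedestrian-detection.xml")   # placeholder model file
compiled = core.compile_model(model, "CPU")           # run on the Intel CPU at the edge
output_layer = compiled.output(0)

cap = cv2.VideoCapture("rtsp://intersection-cam/stream")  # hypothetical camera URL
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Resize and reshape the frame to the model's expected NCHW input (model dependent).
    h, w = compiled.input(0).shape[2], compiled.input(0).shape[3]
    blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[None].astype("float32")
    detections = compiled([blob])[output_layer]
    pedestrians = [d for d in detections[0][0] if d[2] > 0.5]  # confidence threshold
    if pedestrians:
        print(f"{len(pedestrians)} pedestrian(s) in the crosswalk")
```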

“It really is a whole new way for #communities to consume #technology, as well as to ensure that they’re never left behind—which is often the case in the #PublicSector,”— Ken Mills, @IntelliSiteIoT via @insightdottech

Another example is in smarter and safer parks. Parks are now becoming the town centers of a lot of communities, so ensuring that those parks are safe and accessible is really critical. Here’s one park example I love. If you’ve ever been to a dog park after it’s rained, or where the sprinklers were on too long, your dog is a mess, you’re a mess, the park gets destroyed, it can cost the city thousands of dollars to repair that torn-up grass, and the dog park gets shut down for a time. Everybody loses.

But by using edge-computing IoT sensors, you can analyze the soil moisture levels, soil quality, and get all kinds of great data. Then you know if it’s too wet to open and you can send a Facebook or Twitter message to say, “The dog park is closed today because it’s too wet.” You save the city thousands of dollars and reduce people’s frustration, so everybody wins. That’s a great example of where edge technology can be brought together to provide real citizen value. Simple problem, simple solution, profound impact.

We’re also seeing use cases around biosecurity, where we’re using edge AI and IoT to bring together a robotic solution to provide food sanitation and safety. These are things like killing Salmonella or E. coli or Listeria, for instance. But also improving the shelf life of the food itself so that it can be delivered farther away without having to worry about high spoilage rates. This ultimately delivers lower cost—both to the shopper and to the producer.

How are you handling all this valuable data so users can understand it and act on it?

Ken Mills: It goes back to the pizza analogy. Communities want to buy pizzas, not Legos. Through our Deep Insights set of solutions, we can deliver it all together. We can take their IoT data, their computer vision data, their time-series data and other sensor data, bring it together, analyze it, and provide real insights.

We then use our rules engine to determine what can be done with that insight. Do they just keep it and report on it for historical purposes or trend analysis? Do they act on it and generate an event or response, like in the dog park example? Or do they tie it into a third-party tool, like ServiceNow, to create a ticket: “The park shouldn’t be this wet. We haven’t had rain in the last 24 hours, so we must have a sprinkler system issue.” Then they’re going to go out and fix it—maybe proactively, maybe quicker than they normally would.

What sorts of things have been really critical to moving all this forward?

Justin Christiansen: The ability to take multiple data points is a key trend that has really played to our strength, and helped us understand how we can support customers better in IoT deployments—specifically around AI.

You think through the early IoT deployments that we have been involved in over the past five or ten years—it was often specialized equipment, specialized software being deployed to drive a specific outcome. And often that was utilizing some accelerator technology that provided the best performance for a single workload, but wasn’t capable of aggregating all the different workloads. What we’ve found is that customers don’t want to deploy different IT devices for every outcome. They want the ability to run it all on the same IT device, if possible.

And so we’ve invested in software-optimization tools to make that easier to do. We’ve invested in features such as Intel® DL Boost—which we’ve included in the CPU—that provide much better performance on an AI workload. And that drives a lot of benefits to our collective customers with the IntelliSite team because they’re able to use less expensive infrastructure. They don’t need to invest in as much IT equipment to drive multiple use cases.

There are also the current global supply chain challenges. It’s difficult for companies to even find their favorite technology at the moment. And so having the ability to run those applications on IT infrastructure that’s easy to find or that they already have has proved critical as well.

What are your thoughts on the balance between the edge versus the cloud?

Justin Christiansen: We often talk about the cloud or the edge. But it’s really the cloud and the edge. What we’ve seen from our partners is that they want the ability to provide their customers with the service they want. Sometimes that requires an edge deployment. But sometimes it’s best served from a cost or capacity perspective in the cloud. So the reality is that we need to be able to provide both to our partners, because they have to provide both to their customers. I think it really just depends on the workload that they’re running, the outcome they’re trying to drive, and the ROI associated with the type of deployment they’re looking at.

What should communities be looking for in a technology partner?

Justin Christiansen: The technology piece is what we’re focused on because we’re a technology company. But when we look at what it takes to provide these types of outcomes to end customers, it really is a large group of partners. IntelliSite’s been an amazing ISV partner. We have channel partners. There are a lot of systems integrators to deploy this equipment, and those tend to be hyper-regionalized.

At the end of the day we’re all focused on solving the end-customer’s business challenges. We’ve talked mostly about smart spaces, but it really scales across all businesses, and the outcomes that technology can drive to.

Is there anything else that the communities you’re serving should consider going forward?

Ken Mills: Often the easy button can be the most costly one. It’s important for communities of all sizes to look at the whole solution, and really make sure that they’re not getting locked into proprietary, niche solutions that are limited in their scope, and limited in their ability to impact change or bring value. And to really look for solutions that are open and flexible, and that allow for the dynamic innovation and change that are necessary over the life cycle of any technology project.

Justin Christiansen: From a technology standpoint, the scalability of the technology they’re deploying is incredibly important. What we often find is that a customer wants to deploy something at a relatively small scale to see if it works. And if it does work, they want to quickly scale it to something much larger.

I would also add the ability to be flexible in terms of what their technology can provide for them over time. We don’t know what applications or use cases a customer may want a year or two down the road, but we want to provide technology that’s capable of serving them after they’ve purchased it. If you had told me two years ago that I would actually want to be in a restaurant where there were relatively few people and we were served by a robot, I would’ve thought you were crazy, right? That’s actually a somewhat desirable state today.

Related Content

To learn more about creating smart spaces for communities, listen to the podcast Smart Spaces for Smart Communities with IntelliSite. For the latest innovations from IntelliSite, follow them on Twitter at @IntelliSiteIoT and on LinkedIn at IntelliSiteIoT.

 

This article was edited by Erin Noble, copy editor.

Interactive AI Avatars Transform Customer Service

Ask Siri what she looks like, and she’ll give you one of a few answers. She imagines she looks like colorful sound waves, or she’ll tell you she’s invisible. Sounds intriguing, but all you really see is a round icon indicating she’s listening. As our reliance on AI increases, voice alone is no longer enough.

“Human interaction with computers used to just be keyboards,” says Pascal Bérard, Director at Animatico AG, a technology company that offers interactive avatars. “Then it evolved with the mouse and eventually touchscreens. The introduction of voice assistants enhanced human-computer interaction (HCI), but a big part of human communication is nonverbal. And that’s what is still missing.”

This need for personalization is increasing as more transactions move to e-commerce. Customers who visit brick-and-mortar locations want interactive experiences similar to what they find on the web. Retail stores have already implemented in-store digital solutions like digital signage. But complementing those solutions with engaging, talking AI avatars that act as virtual assistants, complete with gestures and facial expressions, can take things a step further, Bérard explains. Customer interactions can now move from transactional to experiential.

Interactive AI Avatars Convert Sales

For example, talking avatars can be used to help customers in a liquor store find the perfect wine pairing for their evening in a fun and engaging way (Video 1). They could also help customers pick out the perfect color and type of paint for their new home projects or find the right route when taking public transportation. And they can even be used to enhance business experiences by greeting customers when they walk in the door.

Video 1. Customers in a wine store get recommendations from an interactive digital sommelier avatar. (Source: Animatico AG)

One business is using interactive avatars to help customers decide on and plan a vacation destination. The vacation booking company Hotelplan recently deployed an avatar solution from Animatico in its travel agencies across Switzerland. Because the agencies are inside shopping centers, the company wanted to extend service beyond the agencies’ business hours. Bérard and his team created a digital-signage solution featuring an intelligent avatar named “Tom.” Tom can provide services to customers when the business is closed as well as assist employees during working hours.

“At first, customers wonder, ‘What’s this new thing?’” Bérard explains. “But as they approach the screen, they are quickly engaged in the conversation with Tom. You suddenly see them smiling and using the system for what they need. This customer interaction is extremely valuable for a brand.”

Deploying Intelligent Avatars

Animatico leverages the founding team’s background as former Disney researchers to create engaging characters that complement retailers’ branding with its Visitor Experience Avatar solution.

“We were working on digital humans for visual effects applications, basically cloning actors to be used. But in the Disney world, we saw how the entire company leverages characters—or cartoon avatars, if you will—and their emotions to create connections with viewers,” says Bérard.

#Retail stores have already implemented in-store digital solutions like #DigitalSignage. But complementing those solutions through engaging and talking avatars with gestures and facial expressions has the power to take it a step further @AnimaticoAvatar via @insightdottech

The Animatico platform includes a set of tools that help companies design their own custom avatars, which can be robotic or photorealistic characters. Templates can be customized to complement the company’s branding, such as choosing colors, clothing, and accessories.

“The avatar itself contains all the magic, like artificial intelligence, computer vision, animation, and the output and voice component,” says Bérard. “Depending on the customer, we can create your avatar based on our templates, changing the color of the shirt, adding a logo and maybe a hat.”

Once a character is chosen, the next step is to create content that guides dialogue with customers based on a product database. Animatico often works with digital marketing agencies on interaction designs that enhance branding. As the avatar communicates, it’s programmed to use a combination of speech, body gestures, and visuals to provide an intuitive interaction.

If a more technical API is needed, such as digitizing a company’s existing mascot, Animatico offers professional services that would implement these integrations, says Bérard. The Visitor Experience Avatar solution uses Intel® hardware to enable computation at the edge. “For us, it was a very natural fit to enter into a partnership with Intel and promote our offering together,” Bérard explains. “It provides the needed rendering quality and the low latencies to bring our avatars to life.”

But it’s not just about the customer experience. Avatar interactions can also provide valuable insights, collecting data from customers that ties into the business’s KPIs to improve operations. For instance, with the Hotelplan use case, the company can measure how many interactions converted to bookings based on the destinations suggested by the avatar.

“One of the huge advantages is that the interaction is digital, and we can record and remember all of them,” says Bérard. “We can get statistics, like the number of interactions and the customers’ age. We also see what data the customer is interested in. And we find mapping between those two.”

Digital Signage Future Trends

Bérard predicts avatars will likely play a big role as the 3D virtual world of the metaverse expands the number of digital spaces where people can meet and connect. Businesses that plan to enter the metaverse, considered the next evolution of the internet, can leverage interactive avatars to serve and engage customers where they are and in new ways. With machine learning and deep learning built into the computing systems, the possibilities are limited only by the imagination.

“The number of uses for avatars is going to explode in the near future,” says Bérard. “We’re seeing from the current trends that interaction with computers is becoming more and more human. I think in five to 10 years from now, interactive avatars are going to be very present in our daily lives, in all types of use cases.”

 

This article was edited by Christina Cardoza, Senior Editor for insight.tech.

Smart Retail Needs Edge Computing with Reliant

Richard Newman


Why are some retailers struggling while others hit double-digit growth? In a word, agility.

During the pandemic, many companies failed because they could not adapt—but others found incredible success by pivoting to new business models. And this trend is likely to continue, as the retail and hospitality sector remains unpredictable. (Just look at the supply chain issues!)

Digital transformation is a key factor in separating the winners from the losers. Businesses that make data an integral part of their physical operations gain the flexibility to make rapid changes. But many stores and restaurants don’t have a way to do this with their current tools.

That is where the power of edge-to-cloud architecture comes into play. With the latest edge computing platform innovations, the evolution of retail spaces is catching up to customer expectations in the digital age.

In this podcast, we talk about what it means to be a physical retail space in today’s digital world, the changing customer expectations and how to address them, and how edge + cloud can power smart retail omnichannel experiences.

Our Guest: Reliant

Our guest this episode is Richard Newman, Chief Technology Officer and Co-Founder of Reliant, an edge computing platform provider. At Reliant, Richard focuses on providing his customers in the retail and restaurant space the tools and technologies necessary to succeed in today’s digital transformation. Before starting the company more than 16 years ago, Richard worked as Chief Information Officer and Consultant to a number of technology companies.

Podcast Topics

Richard answers our questions about:

  • (3:24) The evolution of the retail space and customer expectations
  • (8:22) The move toward cloud-based and edge architectures
  • (10:19) How retailers should rethink their physical spaces
  • (12:09) The importance of flexible IT hardware
  • (13:11) How to overcome resistance to change
  • (14:50) The importance of transforming physical spaces
  • (15:53) Navigating around the technical complexity of edge-based systems
  • (21:24) How technology addresses supply chain issues
  • (26:20) New and better omnichannel retail experiences
  • (29:41) Where retailers can get started on their journey

Related Content

To learn more about edge and cloud retail experiences, read Edge + Cloud = Advanced Retail Operations. For the latest innovations from Reliant, follow them on Twitter at @reliantio and on LinkedIn at Reliantdotio.

 

This podcast was edited by Christina Cardoza, Senior Editor for insight.tech.


Transcript

Kenton Williston: Welcome to the IoT Chat, where we explore the trends that matter for consultants, systems integrators, and enterprises. I’m Kenton Williston, the Editor-in-Chief of insight.tech. Every episode, we talk to a leading expert about the latest developments in the Internet of Things.

Today, I’m talking about agile retail with Richard Newman, the CTO and Co-Founder of Reliant. During the pandemic, many companies failed because they could not adapt, but others found incredible success by pivoting to new business models. This trend is likely to continue as the retail and hospitality sector remains completely unpredictable. After all, just look at the supply chain issues. So, what is separating the winners from the losers? Well, many times it lies in the area of digital transformation, and the challenge is that many stores and restaurants are struggling to apply cloud-scale technologies to their physical operations. And that’s why edge computing is such a hot topic in the space. I’m really excited to hear what Reliant is doing with its edge computing platform, so let’s get into it. So, Richard, welcome to the show.

Richard Newman: Thank you very much, Kenton. I’m glad to be here.

Kenton Williston: So tell me a little bit about what Reliant does.

Richard Newman: So Reliant is an edge computing platform provider. Our customers tend to be retailers or restaurant operators, and among folks delivering edge computing into the space we’re among the largest. We have north of 15,000 deployments to date, and work with a lot of major brands.

Kenton Williston: And what’s your role with the company?

Richard Newman: I’m Chief Technology Officer and Co-Founder. My day-to-day responsibilities consist of really all things technology when it comes to Reliant, but I’m mostly focused on our product and how our customers use our solution.

Kenton Williston: I would say as a founder you’ve got to be pretty proud of having a company you started to grow to be so big. So, what inspired you to get into this space?

Richard Newman: Well, really, it’s been all about customer demand. We have a different story than a lot of VC-funded startups. Instead, our approach to market was really based on need. When I got started in this business, it was really by being a consultant to the retail industry. And it was very quickly apparent that there was a really significant change happening in our market, and that change really was being driven by digital transformation, by the move of applications and services to cloud and to web-scale architectures. And that was leaving something behind, and that something that was being left behind was really what was happening in stores and restaurants. There was really an inability to change that architecture, it was very legacy based. As a whole, we saw just a tremendous opportunity to try to figure out how to crack the code, find a new way to deliver applications and systems into the physical premises.

So we started working with technologies that would provide virtualization and container delivery and orchestration and really strong connectivity up to clouds, into web-scale infrastructure before there was really even a name for it. The name that the industry eventually settled on was edge computing. So we found ourselves with an installed base of edge computing deployments, and a market need that was really very apparent because we had built the solution for the market. We were not one of those situations where we had a solution that was looking for a problem to solve. Our customers were very clear on what they needed and what directions we should go in with our technology. We were just following their guide, really.

Kenton Williston: Interesting. The last couple of years have got to be an especially good demonstration of having a very clear need to follow, with the retail space being so heavily impacted. So, what's been happening there? How has that changed the needs of your customers, and how has edge computing helped ease all these difficulties?

Richard Newman: You can see it just from a business standpoint, COVID and the pandemic—really, a lot of the industry doubled down on the requirements around digital transformation and services that were focused towards things that were contactless, whether that be payment or whether that be buy online, pick up in store, or delivery-based services and fulfillment out of physical stores or restaurants. And our customers were reacting to that. They said, “We’ve got to change even faster than we were already thinking about changing.” And what you saw from what’s happened in the market is, a lot of the older brands that were not necessarily investing as heavily in technology as they might have, had either shrunk substantially, or even potentially gone out of business.

Whereas some of the innovative companies, the ones that were pushing the envelope with things like autonomous shopping, self-checkout, tight integrations and delivery, advanced omnichannel-based order management, payment—they’re the ones that did the best coming through COVID and then out of COVID eventually. What you saw, and this is not news to anyone probably listening to this, is the percentages of a lot of customers’ online orders went up, and that business was reflected in terms of what was happening in their brick-and-mortar locations. But brick and mortar isn’t going away by any stretch. It’s really becoming much more of a hybrid model, and that’s forced the systems that are deployed in that environment to change with it as well. So you see a lot more emphasis on the types of things that really enable shoppers to be connected with their products, fulfillment much easier and simpler. And on the very far end of that spectrum, you see walk-in-walk-out-based shopping starting to make its appearance across multiple markets.

But even simple things like, I need better integration with delivery services, if you're a restaurant operator, because a significant number of my customers now really prefer to order through an app or order online, and they just want their food delivered and they want to be able to know where it is along the way. And these changes require modernization and re-architecting of the actual physical systems. So if you take a large quick-serve restaurant operator, suddenly they're seeing a much larger percentage of the orders they're processing out of their restaurants coming in online. And their current systems and the way their kitchens are configured can't necessarily keep up unless they are investing in next-generation technologies and finding new ways to innovate faster. And on a foundational basis, doing business the same old way with relatively tired, three-tier, client-server-based monolithic architectures doesn't work. The success that they have in the cloud they want to replicate in the restaurant, and that's edge computing, and that's really what's kept us so busy.

Kenton Williston: That makes sense. And it sounds like a key element here is not just to have intelligence in the stores, to have edge computing there in the restaurant, but it’s really about having a holistic architecture, not just encompassing your own business’s operations, but even your vendors’ as well. You want to think about even your vendors’ interfaces. How do you go about creating something that’s so all encompassing?

Richard Newman: That is the hard part. There's no magic wand or secret formula to suddenly going from what might be a technology architecture that's been in place at a large retailer or a large restaurant operator for a decade or longer. It's an incremental approach. It's definitely not a big bang, but edge computing provides some great ways to do this. You can combine virtualization of existing legacy systems, whether they're systems from large, point-of-sale providers or signage providers or other systems. You can run them as virtual machines at the same time, take workloads that might be running on those systems on a monolithic basis out, and deliver those as lightweight containers, and connect all of that up much more closely to cloud-based systems and services. And that's how you start. It could be one simple application at a time, but along the way you'll be able to develop a much more agile architecture.

Kenton Williston: So I think that word you just mentioned, agility, it seems to me like that's really a key element of everything you're talking about here. And again, these ideas about virtualization and containers all tie together for me, forming a picture in my mind: for a long time, people have been moving toward these cloud-based architectures, and one of the big selling points is that you're more focused on small workloads that are very flexible in terms of where they're deployed, how they can scale up, how they can scale down. And this has been a very successful approach in the cloud/IT space. And it sounds like a big part of what you're saying is that same, more granular approach to the tasks that need to be done is really one of the key things, so that you can put them in the right place, physically, on the right equipment, and interface with whatever it is they need to interface with. It sounds like breaking things down into bite-sized pieces is really a critical part of all this.

Richard Newman: And changing up the architecture, going from legacy to an edge-first-based approach provides that opportunity. And it’s not difficult to get one’s arms around it. As you just mentioned, if you’re running a cloud-based infrastructure, people think nothing of adding another virtual machine or virtual host or another set of container-based workloads in the cloud. They just take it for granted that it’s part of the cloud-based services, and they’re right. Whether your cloud is Azure or AWS, provisioning another virtual system is super easy. It’s an action that can be done in a handful of seconds, typically. So if you think about the way things used to be back when we were talking about data centers, well, not so easy. You’ve got to actually order a server from somebody, wait for that server to arrive, rack it, connect it to the network, provision it, load software on it.

In the cloud, what takes us, as I mentioned, seconds was a process that took weeks to accomplish in traditional data centers, which is why cloud has become so predominant, and people have gotten so used to it they probably forget this huge disparity. However, when you start talking about the edge or the physical premises, a lot of operators are still stuck in that same older mindset. If they need a new system deployed in their physical store or physical restaurant, they're thinking about, well, I've got to order another server or another system. I have to have it provisioned. I have to have it shipped. I have to have someone show up and install it. I've got to connect it to the network. I've got to load software on it. All those things.

Edge changes that. In an edge-based workload, you’ll have a foundational system, like Reliant’s platform that will sit at your physical store, physical restaurant. And if you want to add another virtual machine, you provision it and it pops up. If you want to add another set of containers connected into a docker container registry, a set of applications you may have, all those actions are done through the cloud and they’re up and running in your store or restaurant. And I can’t make it any simpler than that. I mean, probably oversimplifying it a little bit, but it’s really that easy, and that’s the big change.
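
As a rough sketch of the kind of cloud-driven provisioning Newman describes, the snippet below pushes a container workload to an edge site through a management API. The endpoint, payload fields, and token are hypothetical placeholders, not Reliant's actual interface.

# Illustrative only: provision a container workload on an edge cluster from the cloud.
# The management endpoint, payload fields, and credential below are hypothetical.
import requests

API = "https://edge-manager.example.com/v1"         # placeholder management service
HEADERS = {"Authorization": "Bearer <api-token>"}    # placeholder credential

workload = {
    "site_id": "store-0142",                         # which store or restaurant cluster
    "name": "kitchen-display",
    "image": "registry.example.com/kds:1.4",         # pulled from a container registry
    "replicas": 1,
    "resources": {"cpu": "500m", "memory": "512Mi"},
}

# One API call from the cloud; the edge cluster pulls the image and starts it.
resp = requests.post(f"{API}/sites/{workload['site_id']}/workloads",
                     json=workload, headers=HEADERS, timeout=30)
resp.raise_for_status()
print("Provision request accepted:", resp.json().get("status", "pending"))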

Kenton Williston: So to put it another way, it’s sort of rethinking how you get work done in the physical world. It’s not like, well, I need another checkout lane, so I need to buy another point of sale. It’s not, I want to boost sales, so I’m going to deploy a standalone digital signage system. It’s really thinking about the compute elements in really more of this sort of cloud-style fashion, where it’s like, well, I will have high-performance compute equipment in my local establishment, and it can do whatever I need it to do. I don’t have to, ahead of time, figure out all the workloads I’m going to put on it or buy different specialized equipment for different specialized purposes. I can think of it more as, I’ve got a pool of resources here that I can use to do all kinds of innovative and interesting things, things that I haven’t thought of yet.

Richard Newman: Totally correct. And again, you can start off with one edge system, two edge systems, and you could add more systems to a cluster as you decide you need more resources. But along the way, again, whether it's a singleton node or a high-availability dual-node cluster, you always start with some spare capacity that allows you to provision additional systems as you need them. One of our customers, a large quick-serve restaurant operator, I think their initial application stack started with four or five containers, and now they're running twelve or fourteen, and there are new workloads that they're coming up with all the time to run on them. And as they move to add more things, they've got plans down the road to do applications with machine learning. They can either use what resources they have or add new hardware to their cluster, which could include GPUs or TPUs to run ML-specific workloads.

Kenton Williston: And I’m assuming here that your relationship with Intel is pretty important in enabling all this. And I should mention that this podcast and the whole of the insight.tech program is an Intel production. But getting back to the question at hand, having very flexible IT-flavored hardware, I would imagine, is pretty critical to being able to achieve these scenarios that you’re describing.

Richard Newman: They definitely are. And that’s one of the paradigm shifts that we’ve seen. There was sort of a dumbing down of the physical systems that were being deployed in the store and the restaurant, and folks were getting much more towards almost composable hardware stuff that looked a lot more like consumer-grade components. But edge, though, makes you rethink that. And you start thinking about your edge-based system as being much more of a micro data center. You want to be able to have something that’s very leverageable, that’s also going to be highly reliable, because suddenly you’re going to be concentrating more workloads running on what might be a cluster, and you’re going to want to have scalable capacity. And again, that moves you all off of what might be consumer-grade components and hardware to something that’s more data center grade. And that’s where Intel’s been doing a lot of tremendous work, and the silicon that they produce and the architectures that they’re supporting lead the industry in that space. And there’s new stuff that they’re coming out with all the time, which is just fantastic for these types of applications.

Kenton Williston: So I'm curious if you see any of your customers having any hesitation to make this move, because things like uptime and making sure you can actually sell the things that you're trying to sell are so critical in retail environments. You just can't have all of your cash registers go down. That's a death blow. Stores, restaurants, hospitality settings have traditionally been, just give me something that is totally bulletproof, dead simple. I don't want to have to train my people very much. The emphasis has been more on keeping it working than keeping it exciting, I guess I could say. Are you seeing some of that pushback? And if so, what's your response?

Richard Newman: I'd say some of it. When you start rethinking the architectures you're using, these businesses can't stop for six months to be able to do a retooling of their IT infrastructure. Everything has to continue to operate. So you end up with the analogy that you've probably heard before, changing the engine on a Boeing 787 while the plane's still flying. There's a little bit of that that goes on, but again, you've got to start with any of these things in bite-sized pieces.

And what often happens in the physical world, in the world of retail and hospitality, is everyone goes through various levels of hardware upgrade cycles or network upgrade cycles or major application shifts. We're moving to either a major new version of an existing point-of-sale system, or we're switching point-of-sale system providers, or we're switching signage-based systems. And it's at those moments of transition that you can take advantage of moving to an edge architecture instead of doing the same old, same old. And that's what we keep trying to help people do: take advantage of the changes you have to make to make the changes you need to make.

Kenton Williston: That's very much reminiscent of the way the online environment has changed. You just cannot survive unless you're constantly innovating and moving forward; otherwise you will get left behind, for sure. And it sounds like what you're saying is that's happened now in the physical world as well. Just doing business as usual is not good enough anymore.

Richard Newman: It's not. But even so, what we'd also point out is that in most cases, when you dig in, there's real ROI associated with moving architectures. The total cost of ownership of edge-based systems matches the total-cost-of-ownership benefits that folks have seen with the cloud, for example. You end up with less physical gear overall, which means there's less stuff to break and less cost associated with break-fix, higher reliability, better uptime. It's easier to support and service these products too, edge-based ones, at least. And we try to drive towards that. So again, depending on the size of your operation, you may be looking at anywhere from a couple million dollars to tens of millions of dollars' worth of investment in moving to an edge architecture. But on the other side of that is typically positive ROI.

Kenton Williston: One of the things that I think your clients might worry about is, okay, this all sounds great. But now I’ve got all this high tech stuff that looks like a mini data center, mini cloud in my facilities. Am I going to need a whole army of IT support to make this all work? And I heard you say these things were going to be more reliable, have better uptimes. So can you speak to how that works in terms of the technical complexity?

Richard Newman: Edge-based systems should look and act like cloud-based systems relative to how you manage and operate them. You should be using easy-to-use GUI tools for configuration. You should have configuration on a highly automated basis. You should be able to use an API to manage configuration, orchestration, all those types of changes along the way. And that mirrors cloud. So again, anyone that's been through a cloud-migration project, and I suspect that most of the folks listening to this podcast will have experienced that, and sometimes that was now two, three, four years ago, they know what's involved with doing it. They understand that there are new technical skill sets that come up along the way. But at the end of the day, the reason cloud architectures were built was because the hyperscalers, Google, Amazon, Facebook, Netflix, Apple, you name it, they couldn't have built their infrastructures as quickly or at the size they were without a lot of automation, which ultimately resulted in a much smaller number of people supporting a lot more technology with a greater overall level of reliability and a lower total cost of ownership.

So at some point, organizations have to look at what they’re doing in their physical premises, again, stores or restaurants in our world. But it could be medical offices. It could be manufacturing facilities, transportation centers, what have you. And they’re going to say, “What’s it look like going forward?” We’re all going to evolve to accept that stuff. And along the way, there are companies like Reliant. We’re rolling up our sleeves and helping our customers achieve success. There’s VARs and integrators out there with similar skill sets. It’s not a go-it-alone type of thing.

Kenton Williston: And on that point, you mentioned things that are coming next, and earlier I heard you talking some about machine learning. What are some of the key areas where you see technology evolving over, let's say, the next six to twelve months, that edge technology will be critical in enabling?

Richard Newman: I'd like to look at a time frame that's longer than six or twelve months, but let's just pick the most obvious and easy one: walk-in-walk-out frictionless checkout. We've grown up in a world where we're used to waiting on checkout lines, having our products scanned and weighed, our shopping carts emptied out, our products handled by a cashier. Machine learning and machine vision and related technologies like LiDAR and shelf sensors, and just the ability to wrap all of that around in a web-to-edge-based architecture, eliminate the need for it. And there are plenty of case studies that show the cost of maintaining a cashier in a lane versus the cost of automating to things like fully automated walk-in-walk-out or just assisted self-checkout, etc. The benefits are just numerous and they result, ultimately, in happier customers. They result in employees that are able to do jobs that involve more than just weighing a bag of Brussels sprouts to try to figure out what it's going to cost a customer.

So I think when we look at the world going forward, we’re going to see a world where there’s many more opportunities to use AI and ML to support facilitated retail, facilitated restaurant operations, etc. And some of these things are really obvious when you look at, for example, mistakes that are made in food prep. Every restaurant has food prep going on in it. But if someone says they’re allergic to dairy, yet sour cream ends up on their burrito whether they wanted it or not, that’s an expensive mistake and it’s an unpleasant customer experience. But it’s really not much of a job for a camera to be able to look at an order and say, oh yeah, that order is getting sour cream, it’s not supposed to. I’m going to stop that right now. I’m going to alert the folks involved in the food prep that there’s a problem here, and I’m going to fix it before it becomes a problem for a customer.

We could keep going. Those opportunities are all around, all day long; they're just so numerous. They get into not just order quality, but they get into food safety issues. How long has something been sitting out? What's the temperature of a refrigerator or a cooking surface? All that data can be collected, and the computer never sleeps. It never makes a mistake when looking at the data, and it's always going to be in a position to catch mistakes or help people make smarter decisions.

Kenton Williston: That's really fascinating. I'm wondering if you can illuminate a little bit more one of the things you talked about with regard to supply chain and just ordering through apps, and having a frictionless, transparent means of acquiring all the things that you need. What's happening in that space that might be new and unexpected?

Richard Newman: All kinds of stuff. Some of it’s changing the way people interact with shopping and products and services. Here in New York where I’m based, there are half a dozen or more startup companies that are competing to do less than 15 minute delivery on an app basis. So these are organizations which largely are trying to crack the code to captured inventory, where they’re maintaining what are called dark stores, and all the interaction they have with their customers is over their app, but they’re guaranteeing 15 minute or less delivery of the products and services, which is just a completely new way of doing things.

And they're looking at it and saying, “This is super efficient. Customers are going to be happier. If they need something, whether it's a large grocery order or whether it's simply a pint of Ben and Jerry's, we can get it really quickly to that customer.” But again, automation really makes that possible. And from the standpoint of what's happening, what drives those dark stores, or their analog in the hospitality space, the ghost kitchen, is automation, is quality control, is being able to match what are complicated orders with the delivery requirements around them. Making sure the right product gets to the right customer at the right time.

Kenton Williston: Yeah. And what about the supply chain side? I know that's been another area of quite a lot of difficulty here over the last year or so. Are these technologies also providing some new benefits in that area?

Richard Newman: Well, certainly. Edge computing can provide options for getting much smarter about what physical inventory or food products you’re maintaining in your store or restaurant. It can provide better information than having a manager run around a location with a clipboard, trying to figure out what’s actually going on, and that data can be real time. So it provides opportunities for more dynamic pricing. It provides opportunities to potentially tell a customer, “Hey, we don’t have this product at this store, but we have it at this other store and we’ll find a way to get you fulfilled.”

Longer term, of course, there are issues or, I guess, challenges with the supply chain that can stretch halfway around the world to where products are manufactured. And then of course, the technologies that we're talking about here can provide visibility into: Where is something in manufacturing? Where is something in shipping? How long is it at the dock? When am I going to get it in my store? And how can I take all this data and then optimize my ability to make sure I'm giving the customer the product or service they want at the best possible price?

Kenton Williston: That sounds really good. Definitely something that's needed at the moment, for sure. So in all of this, we're talking about so many different elements of QSR, retail, and all the rest. Very complicated businesses with thin margins, a lot of moving parts, and we're talking about pretty substantial changes to how these businesses work. So I'd love to hear just a little bit more detail about what Reliant is actually doing to bring all these many elements together and just make sense of all this data that's being collected.

Richard Newman: You're getting to the core of, what does Reliant do? So we are not a hardware company. We're not a provider of business applications, like point of sale or signage. We'll work with many different hardware manufacturers, and we let our customers have quite a bit of say in terms of what hardware they like and don't like. Our customers' workloads define how much hardware capacity, CPU, memory, RAM, and all the rest of that stuff they need. And along the way we're focused on making it as easy as possible for our customers to take their existing legacy applications and run them as virtual machines, or define new, container-based workloads that they want to run, and plug it all together.

And that puts us in a great position. They count on us to be smart and knowledgeable about the retail or the restaurant application stack. So we’re very conversant in payment, point of sale, signage, kitchen automation, order management, RFID—all these components which drive the modern store or the modern restaurant, and we bring that business knowledge together with their requirements on edge computing on a platform that’s as open and as agnostic as possible.

Kenton Williston: One of the big questions, though, I would imagine is this all sounds great, but we’re still, at the end of the day, talking about connecting a lot of things together. And what if those connections fail, networks go down? A piece of data is missing that you were really counting on. What happens in that scenario?

Richard Newman: Great question. And it’s one of the things, again, that drives edge computing. So no one really worries about the connectivity within the cloud for the most part. It’s just expected to run. If you’ve got applications that are running in the cloud context, they’re all running locally and they operate. In edge computing, you don’t get the same guarantee of connectivity from edge, from store, from restaurant to cloud. So one of the focuses that we have is making sure that when customers are thinking about their workloads, they end up in situations where those workloads can run even if cloud connectivity is compromised, not available at all, or just degraded. And that’s very important because, again, you’re talking about customers who might be walking into a store, walking into a restaurant, the business needs to happen. It can’t be, “Sorry, we’re not going to be able to make your burger today because we’ve lost cloud connectivity.”

“Our stove works, but we just don’t know how to cook a burger without the cloud-based systems telling us what to do.” So resiliency is super important. It’s part of what you do with edge-based computing. And then the other thing I’ll point out is that management of systems at scale also becomes both an opportunity and potentially a challenge. So you’ve got to manage all of these systems that are running in cloud and edge with a high degree of integrity. So that, as you just mentioned, key pieces of data, whether they’re configuration data or data that an application requires, don’t suddenly vanish. “Oh, this isn’t working properly. I’ve lost my application. It’s not working because I now don’t know what my IP address is supposed to be.” And that’s another important part of what we try to do, which is really open up and make configuration highly automated, and also highly visible. Whether, again, you’re looking at it through a web-based GUI that we have available through our cloud, or via API.
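
A common way to get the resiliency Newman describes is a store-and-forward pattern: commit every transaction locally first, then sync to the cloud when connectivity returns. The sketch below is a generic illustration of that idea, not Reliant's implementation; the cloud endpoint and record format are invented for the example.

# Generic store-and-forward sketch: the store keeps operating offline, and queued
# records are forwarded once cloud connectivity comes back. Endpoint is hypothetical.
import json
import sqlite3
import requests

CLOUD_URL = "https://cloud.example.com/v1/transactions"   # placeholder endpoint
db = sqlite3.connect("outbox.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, body TEXT)")

def record_transaction(txn: dict) -> None:
    # Always commit locally first so the register or kitchen keeps working offline.
    db.execute("INSERT INTO outbox (body) VALUES (?)", (json.dumps(txn),))
    db.commit()

def flush_outbox() -> None:
    # Forward queued transactions whenever the cloud is reachable again.
    rows = db.execute("SELECT id, body FROM outbox ORDER BY id").fetchall()
    for row_id, body in rows:
        try:
            requests.post(CLOUD_URL, json=json.loads(body), timeout=5).raise_for_status()
        except requests.RequestException:
            return  # still offline or degraded; try again on the next cycle
        db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        db.commit()

record_transaction({"order": 1042, "total": 12.50})
flush_outbox()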

Kenton Williston: The other thing that I'm wondering about here: we talked just a little bit about the way shopping patterns are changing, and how so much has moved to an app-based or online-based model, but at the same time, physical locations matter a lot. So how does this whole idea of an edge-to-cloud platform enable a better omnichannel experience, one that gives people enjoyable and consistent experiences however they're engaging with you?

Richard Newman: Well, again, in the legacy world, you had cloud-based systems that might be driving e-com or even app-based stuff that were then connecting into legacy point of sale or legacy order management or legacy systems in stores. What edge provides is the opportunity to flip that around and start to take steps to modernize what might have been a legacy infrastructure. We’ve seen this again and again and again, where the pace of innovation in the traditional retail or traditional restaurant system is not keeping up with the pace of innovation that’s taking place in the digital cloud-based space.

So there’s a big game of catch up happening, and I’m not talking about the sauce that goes on a burger. The catch up that we’re talking about here is, how do I modernize my in-store systems quickly? How do I start to be able to, as I mentioned earlier, look at workloads that might be running on a legacy point-of-sale system and start to move them off into containers? Whether those containers are running in cloud, in store, or in both places as part of a better strategy for, as you pointed out earlier, greater agility, I’ve got to keep up with innovation.

Kenton Williston: Can you walk me through what the actual process here would be when Reliant engages with the customer? Where do things start, and what are some of the key steps that you walk through with your customers to get them to this nirvana of edge computing?

Richard Newman: It really starts with education. We're still in the early days of edge. There aren't specific standards. The best practices associated with edge are still being debated. And customers naturally have a lot of questions. They may have different motivations. As I said, we've had customers approach us who are just fed up with the operational reliability within their store or restaurant locations, and they want to know how to improve it with an edge-based architecture. Other customers are looking at the wholesale transformation of some key systems, and they want to do it the right way on an edge platform. But all those things start with just conversations. “How does edge work for me? Here's what I have today. Here's where I'm thinking of going with my store or my restaurant systems. Reliant, what do you guys think? What are you seeing in the market? What are other customers doing? What are some of your competitors doing? How do you react to what they're saying to the market or to us potentially?”

All of our engagements usually start with some element of design that usually goes into lab. Nobody makes an eight- or nine-figure investment in edge computing architectures without really testing it out. So we typically start with lab. Lab is usually based on customer requirements: what applications they want to run, what type of network they want to have, and what type of hardware is going to be deployed. And they look at all kinds of things, including operational reliability, performance, etc. And once they're satisfied that it's taking them in the right direction, we typically move to a pilot.

And depending on the size of an operator, a pilot could be a couple stores, or it could be a couple hundred. We've got some customers who approach 10,000 or more locations. And a good pilot for them is 300 or 400 locations before they start feeling comfortable with a commitment to deploy 5,000 or 10,000 locations. So this entire exercise typically takes place within a single calendar year. Sometimes it can be less than that. And actual production deployment at scale will then start after they've reached this degree of satisfaction with the solution that's been proven out through the pilot.

Kenton Williston: What would you say to a retailer, QSR, c-store, what have you, that’s thinking about pursuing some of these technologies and just isn’t quite sure when, where, and how to get started? What would you suggest they do as a first step?

Richard Newman: Read. The analysts we work with, the folks over at Gartner, and we know the folks over at Forrester, have a lot to say on this—their fingers are right on the pulse of what's changing. And not just from a horizontal technology basis; they have industry specialists, too, that understand what edge is doing to retail and hospitality. There's also just a wealth of information online. Some of that can get confusing, because you've got different vendors that may have different agendas to promote regarding how they perceive edge. Those visions tend to line up with their products very neatly, whether their product is a legacy, repurposed, hyperconverged infrastructure solution that they're now calling edge, or they might be a very large technology provider that has a very specific vision of how they think the world should work, such as IBM with their Watson architectures, relative to AI or ML.

But once you get past the noise, you start to have the conversations. You start to understand more. And again, if you want to call it a sales process, our sales process always starts with education, and we do our best to try to be available to anyone that's interested. We try to show up at all the major trade shows, addressing retail, hospitality, and other markets. And we just start with saying, “So, what do you have? What do you think? What are you looking to do? What are you looking to change? What are your points of pain today?”

Kenton Williston: Totally makes sense. So, Richard, I have to say you’ve just plowed through so much information. It’s just been amazing. I can’t think of anything else to ask you. Is there anything you wish I had asked you?

Richard Newman: Well, I want to make sure folks know where to find me. We’re at Reliant.io. That’s Igor, Oscar, Reliant.io. I’m available on LinkedIn as well as a couple other social media vehicles like Twitter, etc. But beyond that, we’re just really excited with the pace of change right now. This couldn’t be a better time. COVID has driven, obviously, a lot of work to us, as organizations have scrambled to adapt. But coming out of COVID, we’re seeing another phenomenon, which is suddenly everyone is saying, “All these projects I had, for whatever reason they’ve been on hold, now I’ve got to get going with them.” So we’re busy. It’s delightful. And I just want to keep it going.

Kenton Williston: Well, Richard, I want to thank you again for your time today. This has been an absolutely fascinating and educational conversation.

Richard Newman: Thank you, again, so much for being moderator for this. This is wonderful.

Kenton Williston: And thanks to our listeners for joining us. To keep up with the latest from Reliant, follow them on Twitter at ReliantIO and on LinkedIn at Reliant.IO. If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We'll be back next time with more ideas from industry leaders at the forefront of IoT design.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

 

This transcript was edited by Erin Noble, proofreader.

AI Medical Diagnosis Solutions Transform Healthcare

While human beings have always strived to advance as a species, today’s technological innovations allow us to make unprecedented leaps—especially in the field of medicine.

The Italian expression “tali sono gli occhi, tale è il corpo” roughly translates into “such are the eyes, such is the body.” But it was in ancient Greece where Hippocrates conducted early research on the human eye, as he believed that it offered a window into health and well-being. To this day, scientists and physicians continue to refer to the Corpus Hippocraticum.

That same desire for discovery and progress has given rise to digital health technologies that have led to advancements like AI medical diagnosis solutions, which have dramatically improved our quality of life. Combining ancient wisdom with modern innovations, Jan Hlaváček—an engineer who has been passionate about medical software since he was 16 years old—understands how technology can be harnessed to improve life on a global scale. In fact, Hlaváček is COO of Aireen, a provider of AI-based medical screening devices based in the Czech Republic.

“It will be a big challenge to ensure the necessary level of healthcare,” says Hlaváček. “As there is more and more work for healthcare professionals, there is an increasing need for more medical expertise. Healthcare should innovate itself. It’s very important.”

Developed hand in hand with the ophthalmology experts at Military University Hospital in Prague, Aireen’s solution aims to facilitate diagnosis of diabetic retinopathy, age-related macular degeneration, glaucoma, and Alzheimer’s disease.

“As there is more and more work for #healthcare professionals, there is an increasing need for more #medical expertise. Healthcare should innovate itself. It’s very important” – Jan Hlaváček, Aireen via @insightdottech

AI and Computer Vision in Healthcare Applications

From a patient’s first interaction with the healthcare system—whether with a family physician or an optometrist—this platform serves as an invaluable diagnostic resource.

Aireen's solution, trained with more than 1.5 million fundus camera images—which capture the back of the eye—can provide over 99% sensitivity when analyzing a new retina image. This level of accuracy represents a substantial and compelling contribution to the diagnosis. Because the result is generated with mathematical rigor, it is free from the emotional conditioning typical of a process led by human beings.

Healthcare providers need only connect a fundus camera to the Aireen solution, which is available on the cloud or easily installed on-premises, to leverage the power of AI in their diagnoses.

The software respects the privacy of sensitive data, with encryption practices that reasonably ensure the data cannot be compromised. And this is without the need to install other devices apart from the fundus camera and a standard computer, or, in larger medical facilities, an on-premises installation.

In the case of cloud services, you can upload the image coming from the camera using a simple, dedicated web interface and press the button to start the analysis. In a short time, the system generates a report that can be downloaded by the practitioner. That’s all.
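
For illustration, that workflow might look something like the following from a client's point of view. The service URL, endpoint paths, and field names below are placeholders, not Aireen's actual API.

# Hypothetical sketch of the cloud workflow: upload a fundus image, trigger the
# analysis, and download the report. All endpoints and fields are illustrative.
import requests

BASE = "https://screening.example.com/api"   # placeholder service URL

with open("patient_0123_fundus.jpg", "rb") as f:
    upload = requests.post(f"{BASE}/images", files={"image": f}, timeout=60)
upload.raise_for_status()
image_id = upload.json()["id"]

# Start the analysis, then fetch the finished report for the practitioner.
requests.post(f"{BASE}/images/{image_id}/analyze", timeout=30).raise_for_status()
report = requests.get(f"{BASE}/images/{image_id}/report", timeout=60)
report.raise_for_status()

with open("report.pdf", "wb") as out:
    out.write(report.content)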

Optimizing AI-Based Diagnostic Solutions

The solution is built upon artificial intelligence, computer vision, and Intel® deep-learning technology, which enables users to analyze the fundus noninvasively, and painlessly screen patients based on digital scans of the retina. The image is then processed by proprietary algorithms that leverage the Intel® OpenVINO toolkit.

OpenVINO’s ability to optimize AI models for high-speed, efficient performance on Intel® hardware brings a new dimension to healthcare applications. It enables solutions like Aireen’s to offer diagnostic accuracy in a matter of minutes, demonstrating the benefits of AI applications in medical diagnostics and healthcare.
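Running an optimized model with OpenVINO generally follows a simple pattern: read the model, compile it for the target Intel hardware, and run inference. The minimal sketch below shows that pattern with a placeholder model file and input size; it is not Aireen's proprietary pipeline.

# Minimal OpenVINO inference sketch. The model file and input size are placeholders.
import cv2
import numpy as np
import openvino.runtime as ov

core = ov.Core()
model = core.read_model("retina_classifier.xml")   # hypothetical optimized IR model
compiled = core.compile_model(model, "CPU")        # or "GPU", "AUTO", etc.

# Load a fundus image and reshape it to the NCHW layout the model expects.
image = cv2.imread("fundus.jpg")
blob = cv2.resize(image, (512, 512)).transpose(2, 0, 1)[np.newaxis].astype(np.float32)

result = compiled([blob])[compiled.output(0)]      # run inference on the Intel device
print("Predicted scores:", result)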

“This is just the beginning of a transformative journey where AI, enhanced by OpenVINO, paves the way for a future where healthcare is more accessible, personalized, and effective,” says Anisha Udayakumar, AI Evangelist at Intel. “The synergy of AI and Intel technology in healthcare is not just about innovation; it’s about creating a healthier, more hopeful world for all.”

Furthermore, Intel software plays a critical role in other AI-based medical diagnostic solutions. Segmentation is essential for isolating specific areas of interest in medical images, a key step in many diagnostic processes. By further optimizing these models with techniques like quantization, OpenVINO significantly improves model efficiency and performance. This matters in healthcare because it allows complex medical images to be processed efficiently, supporting the broader goal of using AI models to enhance diagnostic accuracy and speed, Anisha Udayakumar explains.

Enhancing Medical Diagnostics

The solution also leverages the DICOM standard, which guarantees interoperability between the local device and the solution. DICOM—Digital Imaging and Communications in Medicine—is the international standard for medical images and related information. It defines the formats for medical images that can be exchanged with the data and quality necessary for clinical use. At the end of the process, the system deletes the provided image to respect the privacy of sensitive data.

“The DICOM protocol is implemented directly in fundus scans,” says Hlaváček. “It allows us to automatically receive data from the scans to our app. We can also send a report to the application, which subscribes to it. Plus you can create a request for examination, with a unique ID for all data, which is produced during the workflow.”
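
At the code level, DICOM interoperability means any compliant tool can read the same tags and pixel data. The short example below uses the open-source pydicom library as a generic illustration; it is not Aireen's code, and the file name is a placeholder.

# Reading a DICOM file with the open-source pydicom library (generic illustration).
import pydicom

ds = pydicom.dcmread("fundus_scan.dcm")   # placeholder DICOM file from the camera

# Standard tags travel with the pixel data, so any compliant system can use them.
print("Patient ID:", ds.PatientID)
print("Modality:", ds.Modality)
print("Study UID:", ds.StudyInstanceUID)

pixels = ds.pixel_array                   # image data as a NumPy array
print("Image shape:", pixels.shape)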

To be clear, the goal is not to serve as a substitute or a replacement for medical expertise but rather to complement it. Aireen’s solution enhances practitioners’ diagnostic skills and frees their time to come up with treatment plans, respond to psychological challenges, and engage authentically and empathetically with their patients.

Digital Health Transformation: The Time Is Now

According to the latest research issued by the World Health Organization and other institutions, more than 500 million people have been diagnosed with diabetes worldwide, and this alarming number is expected to grow by 50% over the next 15 years. Underscoring the significance of Hlaváček's innovation, diabetic retinopathy and age-related macular degeneration are the most frequent causes of blindness among people of European descent.

Bearing that in mind, it is not only timely but also urgent to facilitate diagnosis on a broad scale. Aireen's solution, together with Intel technology, holds tremendous promise. As a CE-certified medical device, it is already in daily use at ophthalmology clinics across the Czech Republic and Slovakia.

“Seeing the fusion of AI with healthcare, particularly in innovative solutions like Aireen’s AI-based Retina Eye Diagnostic tool, is truly a source of excitement for me,” says Anisha Udayakumar. “Aireen’s application of AI for more accurate and quicker diagnostics not only demonstrates the immense potential of this technology but also highlights the revolutionary role of Intel’s OpenVINO toolkit in healthcare technology.”

And because the Aireen platform provides convenient access to early diagnosis that leads to treatment, it can help address the urgent shortage of ophthalmologists across the world.

If Hippocrates were alive today, he would, no doubt, be entranced by our medical advances and modern technologies, and would be eager to build upon the Corpus Hippocraticum for future generations.

 

This article was originally published on February 18, 2022.

This article was edited by Leila Escandar, Editorial Strategist for insight.tech.

Smart Spaces for Smart Communities with IntelliSite

Ken Mills & Justin Christiansen

Smart cities are nice, but what about smart communities?

By focusing on people who live in cities, the same technology could be used to transform public spaces and improve daily lives. For instance, warnings could be sent to oncoming vehicles about pedestrians in the crosswalk. Or pedestrians could be alerted to vehicles approaching at high speed. Parents could even have peace of mind knowing their local park is protected with real-time monitoring.

With the right technology partner, there are so many opportunities to make communities safer, smarter, and more connected. In this podcast, we explore what a smart community encompasses, the benefits for both citizens and cities, and the technology partnerships that go into making this all possible.

Our Guest: IntelliSite

Our guests this episode are Ken Mills, Chief Executive Officer for global AI and IoT technology provider IntelliSite, and Justin Christiansen, General Manager of IoT Platform & Solution Sales at Intel®.

At IntelliSite, Ken focuses on delivering safer, smarter, and more connected solutions to communities through AI and IoT technology. He is also CEO for EPIC IO Technologies, the partner company of IntelliSite.

At Intel, Justin and his team focus on enabling technology to scale across multiple verticals. He joined Intel in 2005 and was the Director of Strategy Business Development before joining the company’s sales and marketing group.

Ken and Justin answer our questions about:

  • (2:47) Smart communities versus smart cities
  • (4:48) What a smart community philosophy looks like in practice
  • (7:13) Where AI and IoT fit together in smart spaces
  • (9:48) Smart community use cases and trends
  • (14:06) How to provide valuable data to the community
  • (17:00) Technology evolutions making smart spaces possible
  • (24:15) IntelliSite’s Safer, Smarter, Connected Communities as a Service
  • (28:49) What communities should look for in a technology partner
  • (31:00) Smart-space considerations for 2022

Related Content

For the latest innovations from IntelliSite, follow them on Twitter at @IntelliSiteIoT and on LinkedIn at IntelliSiteIoT.

 

This podcast was edited by Christina Cardoza, Senior Editor for insight.tech.

 


Transcript

Kenton Williston: Welcome to the IoT Chat, where we explore the trends that matter for consultants, systems integrators, and enterprises. I’m Kenton Williston, the Editor-in-Chief of insight.tech. Every episode we talk to a leading expert about the latest developments in the Internet of Things. Today I’m talking about the idea of smart communities with Ken Mills, CEO of IntelliSite, and Justin Christiansen, the General Manager of IoT Platform & Solution Sales at Intel®.

We often hear about the term smart cities, but what this fails to encompass is the community living in those cities. The benefits of technology aren’t limited to governments. No! Smart connectivity can transform public spaces in ways that deeply improve the everyday lives of citizens. In this podcast, we’ll explore what a smart community really means, the benefits for both citizens and cities, and the technology partnerships that go into making this all possible.

But first, let me introduce our guests.

So, Ken, I’d like to welcome you to the podcast.

Ken Mills: Thank you for having me.

Kenton Williston: And, can you tell me about IntelliSite, and what your role is there?

Ken Mills: IntelliSite is an AI and IoT company, and I am the CEO of IntelliSite, as well as of EPIC IO, the parent company. IntelliSite focuses on delivering outcomes based on AI technology and IoT technology brought together, often referred to as AIoT. We focus on computer vision, sensor data, and sensor fusion between video and traditional sensor data, bringing it all together for meaningful outcomes for our customers.

Kenton Williston: Yeah. I spent a little time on your website, and the range of work you're doing is very interesting. I'm looking forward to digging into that a little bit deeper, but first I also want to welcome Justin to the program.

Justin Christiansen: Thank you for having me.

Kenton Williston: Tell me a little bit about your role at Intel.

Justin Christiansen: I'm in our IoT sales organization, and my team's focused on platforms and solutions, which really means enabling technology that can scale across multiple verticals. We focus on things like video technology, display-and-payment technology, ruggedized devices, and robotics. I mean, we've got a focus on AI software enablement as well.

Kenton Williston: Right. Well, it's great to have both of you here. And I'm looking forward to hearing from each of you individually, and beyond that, how Intel and IntelliSite are working together. And I'd like to start that conversation with you, Ken, by talking about your philosophy. So, again, like I said, I spent some time on your website, and it has a really interesting point of view on video. And one of the things that really stood out to me is the way that your company's talking about smart communities, instead of just smart cities. So, can you tell me what the difference is in your mind between these two, and why does it matter?

Ken Mills: It’s a great pull that you found from the website. And it’s definitely something that we focus on, and it actually started at my time at Dell, when I led the business around computer vision, and safety and security, and worked with Intel there as well. We would meet with different constituents from counties, states, towns, campuses, stadiums, all these different organizations. And when you talk to them about being a smart city, or a digital city, or a safe city, they often cannot relate directly because they’re not often a city, right? They might be a state agency, they might be a county, they might be a small community, they might be a campus.

So, we felt that it made much more sense to approach the market from a community perspective, where it’s much more inclusive to all the different types of users and constituents we might be talking to around our technology, right? Because it’s a community of people that often are coming together to solve a solution around making their communities safer, making the community smarter, or even providing more of a connected community so constituents could have access to services that are necessary to operate day-to-day.

Kenton Williston: Yeah, I love that. And I totally hear what you're saying there: it's easy as a gearhead myself to get really excited about all the technology and cool new bells and whistles that come out all the time. But at the end of the day, what we're really trying to do here is serve humans and make humans' lives better. And I really like the idea that you're putting forward here about thinking of communities not just from a practical standpoint of, “Hey, maybe you're not a city per se.” But also just the, “Hey, at the end of the day, this is about helping people have better lives.” I think that's great. And I'd really be interested in hearing a practical example of what that means. And one that came up in my preparation for this podcast is some work you did with the city of Riverbank, California. So, I'd love to hear how your philosophy informed that work, and what exactly that work was in the first place.

Ken Mills: It's a great example. And there's a number of other good ones, but the city of Riverbank is a great partner, a great community that is a city that's also part of a set of other cities, a part of their county, where we're working with the county and all the cities in that county to deliver the same or similar services around safer, smarter, more connected communities as a service, which is the SCaaS offering that you referenced earlier, and what led us to partner with that city. And it's very common that we get these types of requests, and I equate it to, we all were kids at one point, or have kids, or both hopefully, and we've built or played with Legos or at least all seen Legos, one way or another. And they seem like a great idea, right? And most people equate technology to a set of building blocks or a set of Legos that you put together and get your desired outcome.

But most small cities, or even big cities for that matter, but most communities do not want to build Legos. They want to order a pizza. They want the whole thing delivered to them, hot, ready to eat, and tasty, and not have to worry about how it was put together or have the responsibility of putting it together. They want it just delivered, ready to go. And we found that most cities, when they think about IoT, or they think about AI, or they think about how do they do both of those things, they’re not ready to build the Lego pieces and worry about: Did they put it together right? Did they follow all the directions? Do they have the right skills? Do they have the right time? Or even where to start. They want to know that when they decide on a project and they decide to spend their critical assets of time and money, that they’re going to get the outcome that they really expected when they started the project.

And so, by delivering it as a service, we’re able to partner with communities to ensure that they get the pizza, and they’re not left with a bunch of Legos that they don’t know how to put together.

Kenton Williston: Yeah, that totally makes sense to me. And, Justin, I’d love to hear from you. I’m betting that this experience is something that you have shared in your work with partners and customers, that there is a trend towards really wanting a more complete solution. So, I’d love to hear your thoughts on the larger trends you’re seeing in that regard in smart spaces. And, for that matter, some of the key technologies that Ken’s already introduced, such as AI, and IoT, and how this all fits together.

Justin Christiansen: Sure. I think Ken did an excellent job of talking about some of the benefits, the simplicity, and the pizza analogy. Safety is certainly one of the key focus areas for customers as they deploy smart city technology, but also smart spaces technology. So you can think through things like stadiums, theme parks, cruise ships, and it's not just a focus on safety; it's about giving a better experience, whether that's to citizens or customers. Personalizing those experiences. It's more than just being able to do public safety, but I think public safety is at the heart of what our customers are interested in.

As you mentioned, the adoption of AI really enables our customers to improve their business operations and customer experience with technology in areas like the retail environment, providing more seamless checkout experiences, or making sure that shelves are fully stocked. As well as, with COVID, the integration of robotics into those experiences to minimize person-to-person interaction. And what we see in smart spaces is increasing interaction between those robots and people, whether that's, again, in a warehouse or a restaurant, to provide a better experience, a safer experience. And that's true across all smart spaces.

Kenton Williston: Yeah. That makes sense, and I agree. And it’s been very interesting. I think one of the things that has been fascinating to me, about especially the impact of the pandemic, is how many of these technological changes are making that leap from the stuff that I’m interested in as a professional and as, like I said, a geek, to things I’m directly experiencing in my own life.

I went to get a booster shot the other day. And there in the city offices were all these kinds of technologies we’re talking about. Cameras making sure things weren’t too crowded, a temperature check station, which was powered by a video camera—you look in the screen and it evaluates you to see if you’re healthy or not. I think we’re all getting to have our firsthand experience, like you said, even about the sporting venues, of these technologies having a really meaningful impact on our day-to-day lives. So, Ken, I want to come back to you a little bit, again, reflecting on your experience with Riverbank and other customers. Can you lay out some of the use cases you’ve worked with, and what some of the important trends are that you’re seeing?

Ken Mills: Yeah, great question. And some of the use cases we’re seeing are smart and safe intersections, right? So, as people are out and about more than ever, people and communities are really looking to ensure that the crosswalks are safe, and reducing pedestrian fatalities down to zero in a concept called Vision Zero. We partner with communities to really help them accomplish their Vision Zero goals. And Vision Zero is a concept around reducing pedestrian fatalities down to zero—pretty noble concept. And there’s a lot of different ways you could do that, with better street marking, better lighting, traffic intelligence for the actual traffic lights themselves. There’s lots of ways that that can be accomplished.

But video technology is also a great tool for these communities to improve on their intersection crosswalk safety—to understand how their crosswalks are being utilized to potentially warn oncoming vehicles that someone’s actually in the crosswalk and to be more aware. Or to use AI to warn someone who’s in the crosswalk that a vehicle might be approaching at a high rate of speed, and they need to be aware that that person may not be paying attention, may not stop in time, so that both sides of the equation of the relationship there can be adjusted to prevent a fatality, which ruins two lives at the very least. And it’s one of those things that communities are looking at across the board, right? And there’s ways that you can do that with AI technology and edge computing, leveraging Intel chipsets and OpenVINO tool sets to really improve that process, and oftentimes reduce the cost of deploying those technologies and getting those results all together.

Another example where we’re using some of the same technology is at smarter and safer parks. Your parks are now becoming the town center of a lot of communities. Park utilization is at an all-time high. People are not interested in being at home, and they’re not interested in being in a crowded shopping center as much as they used to be. And people want to go spend more time outside, more time in parks. My family is a great example of that. We spend a lot more time in parks than we ever have before. So ensuring that those parks are safe and accessible and really are providing the tools and services necessary for all the constituents to be able to use is really critical for communities to really provide the primary service of being a safe and smarter community to live. So we’re seeing a lot of use cases around safer and smarter parks, for example.

And one specific one that cracks me up is for the dog lovers out there. If you’ve ever been to a dog park after it’s rained, or where maybe the sprinklers were on too long and it was a little bit too wet, your dog is a mess, you’re a mess, the park gets destroyed, and it can cost the city thousands of dollars to repair that torn-up grass, and the dog park gets shut down. And everybody loses.

But, using edge-computing IoT sensors, you can analyze the soil moisture levels and even soil quality, and understand better the irrigation or fertilization rates and get all kinds of great data. But very simply put, you can know if it’s too wet to open, and you can send a Facebook message or a Twitter message, or all the above and say, “Hey, the dog park is closed today because it’s too wet.” And then, as soon as the moisture levels drop down, you could fire off a message that says, “The dog park’s open.” And you save the city thousands of dollars, reduce people’s frustration, ensure that the park was open for as much as it possibly could be, right? So everybody wins. So, that’s a great example of where edge technology can be brought together to really provide real citizen value.

Kenton Williston: Yeah. I love that example. That’s great. I have a Scottish Terrier, and boy, oh boy, let me tell you, anything messy at all in the outside world, she’s just a little mop. I love the idea of reducing my cleanup work a lot. That would be a big plus for me.

Ken Mills: Simple problem, simple solution, profound impact.
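To make that dog-park logic concrete, here is a minimal Python sketch of the threshold-and-notify pattern Ken describes. The sensor read and notification functions are hypothetical placeholders, not IntelliSite or Deep Insights APIs, and the moisture threshold is an assumed value.

```python
# Minimal sketch of the dog-park idea: read soil moisture at the edge,
# compare against a threshold, and post an open/closed notice.
# read_soil_moisture() and post_status() are hypothetical stand-ins.

MOISTURE_THRESHOLD = 0.35  # assumed fraction of saturation; tune per site


def read_soil_moisture() -> float:
    """Placeholder for an edge IoT soil-moisture reading (0.0 = dry, 1.0 = saturated)."""
    return 0.42  # stubbed value for illustration


def post_status(message: str) -> None:
    """Placeholder for pushing a notice to Facebook, Twitter, or a city website."""
    print(f"[park-status] {message}")


def update_dog_park_status() -> None:
    moisture = read_soil_moisture()
    if moisture > MOISTURE_THRESHOLD:
        post_status("The dog park is closed today because it's too wet.")
    else:
        post_status("The dog park is open.")


if __name__ == "__main__":
    update_dog_park_status()
```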

Kenton Williston: Exactly. Exactly. So, one of the things that’s coming to mind as I’m hearing you talk about the diversity of these use cases and the diversity of the organizations that you’re working with—you’ve got a lot of complexity on both ends. So how are you bringing all of this really valuable data into a place where those users can understand it, act on it, share it with their community?

Ken Mills: It goes back to the pizza analogy, right? Communities want to buy pizzas, not Legos. And by providing an end-to-end solution or a whole product for our customers, through our Deep Insights set of solutions, we can deliver it all together, right? So you can take your IoT data, your computer vision data, your time series data, and other sensor data, bring it together, analyze it, provide real insights from that data.

And then use our rules engine to then determine what you do with that insight. Do you just keep it and report on it for historical purposes or trend analysis? Do you act on it and generate an event or response, like the dog park example I gave you? Or do you tie it into a third-party tool, like ServiceNow, to create a ticket that says, “Hey, the park shouldn’t be this wet at this time. We didn’t have rain in the last 24 hours. So we must have a sprinkler system issue.” So I’m going to enter a ServiceNow ticket request to our irrigation department, and they’re going to go out and fix it, maybe proactively, maybe quicker than they would normally. And automate that process and understand how to do that, right?

So, it’s gathering the data through connectivity. It’s ingesting that data through our Deep Insights platform. It’s enriching that data through our AI stack. And then delivering insights to the customers to ultimately get the outcome that they would like. We call it our CIEIO framework. So if you’re an “Old MacDonald had a farm” fan, EIEIO, you’ll never forget it, right? CIEIO is the concept that we believe really helps customers deliver on the outcomes that they want.

And it all begins with connecting the data, ingesting that data, enriching the data, getting insights from that data, and ultimately delivering the outcome. And we’re seeing that not only across community use cases, but smart spaces, as Justin referred to, or even a new technology solution that we’ve entered into around biosecurity, where we’re using edge AI and IoT and the robots that you mentioned earlier to bring together a robotic solution to provide food sanitation and safety measures, to not only make sure that we kill things like salmonella or E. coli or listeria, but that we can also look at improving the shelf life of the food itself, so that it can be delivered to farther places without having to worry about high spoilage rates, and ultimately deliver lower cost, both to the shopper as well as to the producer.
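For readers who want to picture the CIEIO flow, below is a minimal sketch of the connect, ingest, enrich, insight, and outcome stages wired together in Python. All function names, fields, and rules here are illustrative assumptions rather than the Deep Insights platform's actual API; the rules simply mirror the dog-park and ServiceNow examples above.

```python
# Sketch of a CIEIO-style pipeline: Connect -> Ingest -> Enrich -> Insight -> Outcome.
# Names and rules are illustrative only.

from dataclasses import dataclass


@dataclass
class Reading:
    sensor_id: str
    kind: str       # e.g. "soil_moisture", "people_count"
    value: float


def connect() -> list[Reading]:
    """Connect: gather raw readings from field sensors (stubbed for illustration)."""
    return [Reading("dog-park-03", "soil_moisture", 0.42)]


def ingest(readings: list[Reading]) -> list[Reading]:
    """Ingest: keep only well-formed, in-range readings."""
    return [r for r in readings if 0.0 <= r.value <= 1.0]


def enrich(readings: list[Reading]) -> list[dict]:
    """Enrich: attach the context the rules engine needs (threshold, recent rainfall)."""
    return [{"reading": r, "threshold": 0.35, "rained_last_24h": False} for r in readings]


def insight(records: list[dict]) -> list[str]:
    """Insight: decide whether to just report, raise an event, or open a maintenance ticket."""
    actions = []
    for rec in records:
        r = rec["reading"]
        if r.value > rec["threshold"] and not rec["rained_last_24h"]:
            actions.append(f"ticket: possible irrigation fault at {r.sensor_id}")
        elif r.value > rec["threshold"]:
            actions.append(f"event: close {r.sensor_id} until moisture drops")
        else:
            actions.append(f"report: {r.sensor_id} within normal range")
    return actions


def outcome(actions: list[str]) -> None:
    """Outcome: hand each action to the right downstream system (printed here)."""
    for action in actions:
        print(action)


if __name__ == "__main__":
    outcome(insight(enrich(ingest(connect()))))
```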

Kenton Williston: So, Justin, hearing all of these really interesting examples from Ken is making me wonder, as a technology provider, Intel is doing a lot of work to power these applications. And I should pause here to note that this podcast and the larger insight.tech program are produced by Intel. So, I’m interested, from Intel’s perspective, what you see happening under the hood, as it were, with the evolution of technologies to make all these advances possible. So one of the things, for example, Ken mentioned earlier was OpenVINO. And of course, you have all this great silicon. So, what are the sorts of things that, from your perspective, have been really critical to moving this all forward?

Justin Christiansen: One of the key trends that Ken highlighted, and one that I think has really played to our strengths and helped us understand how we can better support customers in IoT deployments, specifically around AI, is that ability to take in multiple different data points. Think back to the early IoT deployments we were involved in over the past 5 or 10 years: it was often specialized equipment and specialized software being deployed to drive a specific outcome. And often that was utilizing some accelerator technology that provided the best performance for a single workload, but wasn't capable of aggregating all of the different workloads that Ken talked about. And what we've found is that customers don't want to deploy a different IT device for every outcome they're trying to drive from a software perspective. They want the ability to run it all on the same IT device, if possible.

And so we’ve invested in software-optimization tools to make that easier to do, to provide better performance on our technology. We’ve invested in features such as Intel® DL Boost, that we’ve included in the CPU, that provide much better performance on an AI workload. And we’ve made it easier for developers to use the CPU, the integrated graphics, or accelerator technology, all under that framework. And that drives a lot of benefits to our collective customers with the IntelliSite team, because they’re able to use less expensive infrastructure. They’re able to invest in less IT equipment to drive multiple use cases. One of the things Ken and I have talked about a lot recently is the global supply chain challenges. It’s not just harder to find your favorite food or clothes, it’s also difficult for companies to find their favorite technology. And so having the ability to run those applications on easy-to-find, or existing IT infrastructure you already have has proved critical as well.

Kenton Williston: The point you made about supply chains is something that’s been coming up in a lot of recent podcasts. And I think one of the benefits Intel offers is having this very standardized, IT-friendly form, which means that, to your point, rather than having to go get a bunch of different hardware, which not only is expensive but at the moment can be difficult to even accomplish, you can do everything on a unified platform that just makes everything a whole lot easier. So, while we’re talking about how great Intel is, Ken, I’ll let you continue the theme here. And I’d like to hear what your rationale is for using Intel-based technology. And, more broadly, how you’ve benefited from working with Intel.

Ken Mills: I think Justin hit a lot of the high points, right? I mean, having the flexibility as a business owner to choose which platform is the right platform for my customer needs is super important. I mean, if we’re honest, customers really don’t care about which inferencing platform is purchased and which AI model we deploy. They ultimately want an outcome, they want a solution. And having Intel as a partner allows us to find the most compatible solution, the most flexible solution. And oftentimes, to the supply chain comment that Justin made, the most available solution to meet the customer’s timeline and need, so that we can ensure they get what they ultimately need, which is a meaningful outcome that they can make business decisions on and really rely on.

So, we’ve seen this with our ability to deploy all the way to the edge at a smart and safe park, as we’ve mentioned. We’ve seen it all the way in a big data center for a major stadium project, or a big multitenant project, or in the cloud, or a combination of all three, with some of our customers. And even in a solution like the robot for biosecurity, as I’ve mentioned, we have edge inferencing taking in IoT data, as well as doing deep learning analysis of the data coming from those IoT sensors to determine how to deploy the ionization platform from the robot. That’s all driven by edge computing, right? So, many different form factors, many different needs from a customer perspective, many different timeframes, timelines, and expectations that have to be met. And having the flexibility in the portfolio options, as a business owner and a business leader, that Intel provides gives me the confidence that when the customer asks a question, our first answer can be, “Yes.”

Kenton Williston: Yeah, I love that. And one of the other things I heard you mention that I think is very important is the evolution of where the compute infrastructure physically lives. There's been a lot of discussion over the last few years about cloud computing. And I think there's a very important role for cloud computing and data center deployments, like in the stadiums you mentioned. But there are also use cases where edge computing, i.e., putting the hardware very, very close to the sensor and very, very close to the space you're monitoring, is becoming increasingly important, particularly as AI is going everywhere, right? You just need a lot of computing horsepower to be able to do things like execute machine vision algorithms. So, Justin, I'd be interested in hearing your thoughts on where you see this balance between the edge and the cloud data center going for public spaces, and what end customers might want to keep in mind as these computing models evolve.

Justin Christiansen: It’s an interesting question. And I feel like we often talk about the cloud or the edge. And it really is the cloud and the edge. What we’ve seen from our partners and our customer is they want the ability to be able to provide their customers with the service that their customers want. Sometimes that requires an edge deployment. But sometimes it’s best served from a cost or capacity perspective in the cloud. And oftentimes we see portions of the pizza, if you will, as Ken has called it, but that full solution, that are optimized for the edge for things like latency, or for applications that require a lot of memory, but they’re still integrated with portions of the application running in the cloud or dashboarding running in the cloud. And the reality is we need to be able to provide both to our partners, because they have to provide both to their customers. I think it really just depends on the workload that you’re running, the outcome you’re trying to drive, and the ROI associated with the type of deployment you’re looking at from an edge or cloud perspective.

Kenton Williston: One of the interesting things you mentioned there was about the dashboarding, and all these concepts from the cloud world of doing things as a service. And, Ken, you mentioned the as-a-service model that you’re offering now. And I’d love to hear a little bit more about what that is exactly, and why you’re taking your offerings in that direction.

Ken Mills: You can get coffee-as-a-service from Panera, right? You can sign up for a taco-as-a-service from Taco Bell. There are as-a-service options all around us. Being able to predictably lock your cost in, know what it's going to be, and know what you're going to get for that cost, and then also getting the benefit of having a company continue to innovate and provide additional features and functionality as they become available within that fixed cost, is very appealing to both sides of that equation, right? As a business owner, I have fixed revenue that I can count on, which allows me to invest further in our technology, our automation, AI, IoT sensor data, dashboards, and all the things that we do, because I know I have customers paying me for that service, and that really motivates me to continue to invest. And those customers get the benefit of the best product at all times, every time they need it.

And a great example of that: we had a customer recently where we deployed the solution a handful of months ago, and since then we'd made some significant feature additions and UI changes, and added some functionality we thought would be really useful to them. So we set up a call, walked them through some of the things we were doing, and asked if they would like us to turn those features on or set them up for them. And they said, "Oh, I didn't know you guys could do that." And we explained, "Yep, it's part of your as-a-service. You're paying us for the latest, greatest functionality that we develop as a service. So as we make those features available to other customers, they're also available to you." And you could just see the customer light up, like, "Oh yeah, this totally makes sense. I'm really glad we did that, because now I get these features that I don't have to go back and get additional funding for, because they're available and they're already built into our as-a-service agreement."

So, we’re very excited about that model. And we’re very excited to have the capital internally to be able to fund that model, right? Because as a business owner you have to prefund all that development, prefund the hardware that goes into these projects, on the promise that it will be paid for over time. And so having that ability and having the financial backing of our investors and our broader company gives us that flexibility to provide that service to our customers. We’re very excited about it, as you could tell.

Kenton Williston: Yeah, absolutely. And that does strike me as being a pretty—and I hate to use this word because it’s overused, but—revolutionary shift in thinking about how to do things like public safety, right? Because those have always been things where you needed another person, you needed another camera, you needed another whatever it was to scale things up, or to do anything differently. And what you’re putting forward is a very different way of looking at things, where it’s like, “Hey, you’ve got this incredibly powerful, capable hardware, and this connection to this dashboard that can be updated. And we can just deploy something new when we come up with something new.” And I think that really opens up a whole new world of possibilities.

Ken Mills: Absolutely. It’s giving the communities access to the best technology when it’s available, and knowing they have a fixed cost of getting that technology. So it really is a whole new way for them to consume technology and ensure that they’re never left behind, which is often the case for public sector, public safety customers is that they don’t often get the latest, greatest opportunities in technologies. And by providing it to our smart and safe community as a service, we’re able to ensure that they are always at the leading edge, or as leading edge as they want to be without having to worry about going back and getting additional budget approval for that functionality. So it’s a great opportunity for them. It’s a great opportunity for us, and we’re very excited to have it. And we see it taking off in many areas across the country, and even outside the US for that matter.

Kenton Williston: Yeah, absolutely. And, again, one of the things I really like about this model is you don’t have to be in LA, or New York, or London to get this stuff. Like you said earlier, you’ve got customers that are much smaller communities or groups of communities that are trying to work together. And it means that they can get the same amazing technology that’s happening in those huge metro areas of millions and millions of people.

Ken Mills: That’s right.

Kenton Williston: So, Justin, this all brings a question to mind for me about what these communities should be looking for in a partner. We’re hearing a lot of things that are pretty interesting and I think unique in IntelliSite’s approach. And I’m wondering what criteria you think are needed in the technology partner to enable successful ventures into smart and safe spaces. And, maybe even more broadly, what if anything communities might want to think about in terms of not just their immediate partner, but the ecosystem that’s behind that partner.

Justin Christiansen: I think there’s a lot to consider. The technology piece we’re focused on, because we’re a technology company. But really when we look at what it takes to provide these types of outcomes to end customers, it really is a large group of partners. So I think the IntelliSite team’s been an amazing ISV partner for Intel. We have channel partners. There’s a lot of systems integrators to deploy this equipment, they tend to be hyper-regionalized. So engaging with the person who’s deploying that video equipment in your favorite city or town in your favorite state, or, again, around the world in different countries, also with partners who can help us co-market, co-sell.

So I know Ken came from Dell, and I think Dell's been an amazing partner, not only in building the technology we've talked about that can provide great outcomes for our customers, but also in co-marketing and co-selling. So it really does take a community of technology vendors and installers. And, at the end of the day, we're all focused on solving the end customers' business challenges. We've talked about some of those in smart spaces, but it really scales across all businesses, and the range of outcomes that technology like AI can drive is quite vast.

Kenton Williston: Yeah, absolutely. So, we’re getting near to the end of our time together. So I want to give some open-form time to each of you. Because there’s, like you said Justin, so many different things to consider here. And just an open-ended question—and, Ken, I’ll give you the first go at this—if there’s anything else we haven’t covered yet that you think is really critical for the communities you’re serving to consider as we go into 2022?

Ken Mills: This is a great question, and one we could spend a lot of time on by itself, so I'll try to keep it concise. I think the biggest thing—whether you're a major metropolitan, NFL-type city or a small, five-thousand-citizen community with just a couple of schools, it doesn't matter—is that it's important for communities of all sizes to really look at the whole solution, and to make sure that they're not getting locked into proprietary, niche point solutions that are very limited in their scope and in their ability to really effect change or bring value to the community. And to really look at solutions that are open and flexible, and that allow for the dynamic innovation and change that is necessary over the life cycle of any technology project.

And what I’ve seen over and over again in my 20-plus years working with communities at levels is that often the easy button can be the most costly. And it’s important to really explore and evaluate the options that are out there to ensure that you’re getting the whole solution and really what you need to solve, or bring as much value to the community as possible.

Kenton Williston: Yeah, absolutely. Justin, anything you’d like to add to that? And one of the things that I’m thinking about here as a possible consideration for 2022 is just the idea of sustainability. I think that has really risen to the top of many communities’ concerns. So, I’m wondering how sustainability fits into this larger picture of having smarter, safer, connected communities.

Justin Christiansen: Yeah. So, you hit on a few interesting points, Kenton. Just to build on what Ken said, from a technology standpoint, the scalability of the technology you’re deploying is incredibly important, right? What we often find is a customer wants to deploy something at a relatively small scale to see if it works. And if it does work, they quickly want to scale it to something much larger. And I would also add the ability to be flexible in terms of what your technology can provide for you over time. We don’t know what applications or use cases a customer may want to have a year or two down the road, but we want to provide technology that’s capable of serving them after they’ve purchased it. And we talked earlier about smart spaces and the impact that COVID has had with robotics. If you had told me two years ago that I would actually desire to be in a restaurant that had relatively few people and was being served by a robot, I would’ve thought you were crazy, right? That’s actually a somewhat desirable state today.

So I think that flexibility to support new use cases is key, especially as we see such a rapid transition in how we interact in our daily lives. You mentioned sustainability. Sustainability and ethics, the ethical use of AI, I would say, are two topics that come up in almost every discussion we have now. They're areas where we're focused on how we can enable better outcomes from a sustainability perspective, and how we can also ensure, to the extent that we can, that there's an ethical use of that AI technology when it's deployed.

Kenton Williston: I know this is something we’ve talked about in some of our prior podcasts looking at video technology. If you’re doing facial recognition, how do you make sure that data is secure and you’re not violating people’s privacy? Or, that you don’t have technology that’s racially biased, or any other biases like that? And that’s a very important set of considerations. Again, it’s looping back to what we talked about at the beginning of this podcast—when everything’s said and done, you’re really trying to make human lives better. So you want to make sure the technology ultimately serves that goal. With that, then I’ll just say, Ken, thank you so much for your time today. We really appreciate you joining us.

Ken Mills: Yes, sir. Thank you guys.

Kenton Williston: And, Justin, I’d like to thank you as well. Really appreciate your time.

Justin Christiansen: Thank you, Kenton.

Kenton Williston: And thanks to our listeners for joining us. To keep up with the latest from IntelliSite, follow them on Twitter and LinkedIn at IntelliSiteIoT.

If you enjoyed listening, please support us by subscribing and rating us on your favorite podcast app. This has been the IoT Chat. We’ll be back next time with more ideas from industry leaders at the forefront of IoT design.

The preceding transcript is provided to ensure accessibility and is intended to accurately capture an informal conversation. The transcript may contain improper uses of trademarked terms and as such should not be used for any other purposes. For more information, please see the Intel® trademark information.

 

This transcript has been edited by Erin Noble, proofreader.

AI Innovations Are Winners for Sporting Venues

Attending a major tournament is a bucket list event for any tennis enthusiast. There’s nothing like sharing the excitement with other fans, watching player tactics, and feeling the competitive buzz. And unlike watching on television, you can make quick decisions on which players to follow and games to watch, and especially, where to buy the best souvenirs.

But with more than a half million spectators, more than 100 competitors, and 30-plus courts, there are many operational challenges in running a safe and smooth tournament. Crowd management—with or without social distancing—is a huge one. This means organizers must have accurate and real-time people counts across a large venue to ensure an enjoyable and safe experience for fans, players, press, and others.

“Facility managers need to proactively prevent having too many people in any single area,” says Tom Urquhart, SVP Global Solutions at IOTech Systems, a provider of secure software edge platforms for the IIoT. “This includes the stands, concession areas, restrooms, and even the number of people on buses arriving at the venue.”

When venue organizers can see where people sit, which exits are used, and where crowds gather, they can redirect people and respond quickly to emergency situations. Additionally, by analyzing historical data, they can reconfigure seating, concessions, and entry points to reduce overcrowding and prevent incidents. IoT edge platforms put this information front and center and make it actionable.

Real-Time Information Supports Real-Time Decisions

To achieve these requirements, IOTech, in partnership with PMY Group—which designs, implements, and manages technology solutions for venues and events—deployed an edge-to-cloud platform to provide real-time visibility across the facilities.

Together, they worked with tournament operations teams to plan a facility-wide solution. And in two months, the system was installed, tested, and operational—including hardware, software, cameras, and sensors.

Using edge AI and cloud-based technologies, the data flowed to a dashboard accessible on organizers’ smartphones, tablets, or computers. By viewing a schematic model of the venue, operators could quickly see counts in each area and identify hotspots. To gain more information about each section, they could drill down to get all relevant data for a location—both current and predicted.
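As a simplified illustration of the kind of per-zone aggregation that might sit behind such a dashboard, the sketch below tallies people counts by zone and flags hotspots against assumed capacities. The zone names, capacities, and threshold are illustrative assumptions only, not IOTech or PMY values.

```python
# Sketch of per-zone crowd aggregation for a dashboard view.
# Zone names, capacities, and the hotspot ratio are illustrative only.

from collections import Counter

ZONE_CAPACITY = {"court_1_stands": 800, "concessions_east": 250, "gate_a": 400}
HOTSPOT_RATIO = 0.9  # flag a zone once it reaches 90% of capacity


def zone_counts(detections: list[str]) -> Counter:
    """Each detection is tagged with the zone its camera covers; tally per zone."""
    return Counter(detections)


def hotspots(counts: Counter) -> list[str]:
    """Return zones whose current count is near or above capacity."""
    return [
        zone
        for zone, count in counts.items()
        if count >= HOTSPOT_RATIO * ZONE_CAPACITY.get(zone, float("inf"))
    ]


if __name__ == "__main__":
    sample = ["court_1_stands"] * 760 + ["concessions_east"] * 120 + ["gate_a"] * 50
    counts = zone_counts(sample)
    print("Counts:", dict(counts))
    print("Hotspots:", hotspots(counts))
```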

#Venues of all types can have the #data needed to not only enhance attendee experiences but even lower energy use and costs. @IOTechSystems via @insightdottech

The technology allowed both managers and spectators to make decisions based on this information. Organizers could determine which matches would be most popular and adjust court assignments based on predicted crowds. And thanks to large video screens, attendees could view the location of open seats in other courts.

And the results were a big win.

“We heard from event leadership that they were blown away by the level of data and accuracy in terms of where people were in the facility at any one time,” says Urquhart. “There was much more insight into what was going on at the venue. They saw what is possible from a baseline technology viewpoint and immediately began thinking about what else they could do with this kind of information in the future.”

Edge Computing Platforms Demand High Performance

The solution combines the IOTech Edge Xpert industrial-grade computing platform and PMY Group Smart Operating Platform software.

The platform depends on powerful, reliable hardware to support these applications. That’s one reason IOTech leverages the latest Intel® technologies around GPU and CPU architectures, and software like the Intel® OpenVINO toolkit to facilitate inferencing.

The company has also been working closely with Intel on the design requirements for one of its new products, called Edge Builder, which it will begin rolling out with PMY. The goal is to make the whole setup process as efficient as possible by remotely deploying the software down to the physical hardware at the venue.

“Intel helped us connect with PMY, and we have been working with their team on a number of projects throughout the last year,” says Urquhart. “Going forward, our joint road map has many exciting projects, and we expect to be working with PMY for many years to come.”

AI Innovations Power New Use Cases

As the IoT edge platform becomes more robust, the possibilities are almost endless. Venues of all types can have the data needed to not just enhance attendee experiences but even lower energy use and costs. And event operators can monetize their data through advertising on digital screens—customizing content based on demographics, activities, weather, and much more.

“Growing the data set means we can bring a much richer experience for both the facility operators and users,” says Urquhart. “And we are expanding the solution beyond just stadiums to other use cases like building automation for better environmental control and secure access, for example.”

“I think the sky is the limit now that we’ve proved out the baseline technology. We’re going to be doing a lot of brainstorming as we take this next level to other applications and new markets,” Urquhart says.

 

This article was edited by Georganne Benesch, Associate Content Director for insight.tech.

This article was originally published on February 16, 2022.