The Future of Smart Buildings? Net Zero

As more cities pursue climate action plans, a new vision has captured the attention of building owners, designers, and operators: a smart building that uses energy so efficiently, its carbon footprint isn’t just reduced but eliminated. Toward that end, companies like Amazon and Google have already set aggressive Net Zero goals—hoping to transform their buildings into eco-friendly machines that don’t consume more electricity than they produce.

Businesses are equally focused on elevating the environment inside their buildings because they know the productivity of their biggest asset, the workforce, depends on it. After all, employees do their best work when they're comfortable, safe, and working in a healthy environment.

Until recently, an energy-efficient building that also meets the needs of its occupants would have been very difficult to achieve. Just think of the costs associated with powering and cooling an entire floor of a building for a few essential employees working on a weekend.

But new smart-building technologies are changing the game. Now companies don’t have to choose between a reduced carbon footprint, a safe and comfortable occupant experience, or cost savings. Advanced data analytics and AI-powered insights help them achieve all three.

Net Zero Optimization Depends on Cohesive Data

What is the key to transforming a building from a passive structure into an invaluable business tool? A “single pane of glass” through which operators can view, command, and optimize performance across multiple building systems. That is, operators need real-time insights into what’s being consumed, at what rate, and why—so malfunctions and inefficiencies can be identified and addressed right away.

This idea applies not only to commodities like energy but also to space allocation and human capital.

“This holistic picture is really the first step toward optimizing your facilities—whether for Net Zero, employee productivity, or both,” says Terrill Laughton, Vice President of Enterprise Optimization and Connected Offerings at smart-building solutions provider Johnson Controls, Inc. (JCI).

The information this kind of optimization requires is difficult to get in a usable fashion in the traditional building environment, where a mix of systems from different eras and providers typically operate independently. Maintaining these isolated systems and mining their data is expensive and difficult. And after all that, gaining tangible and actionable insights would take a highly trained and experienced building professional.

But when systems are connected and data is translated to a common schema, the information can be shared and combined for new and powerful use cases. For example, an operations manager could quickly understand what the organization’s total energy consumption is—across one building, 100, or even 1,000. JCI’s OpenBlue Platform and suite of applications were designed with these goals in mind.
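What does a common schema buy in practice? Here is a minimal Python sketch of the idea. The record shape and the vendor field names are invented for illustration and are not OpenBlue's actual schema:

```python
from dataclasses import dataclass

# Hypothetical common schema: every source system is mapped into this
# record shape before analytics run across the whole portfolio.
@dataclass
class EnergyReading:
    building_id: str
    system: str        # e.g., "hvac", "lighting", "security"
    kwh: float

def normalize_legacy_bms(raw: dict) -> EnergyReading:
    """Translate one vendor's payload into the common schema.

    The field names ("siteRef", "equipClass", "energy_Wh") are invented
    stand-ins for whatever a given building management system emits.
    """
    return EnergyReading(
        building_id=raw["siteRef"],
        system=raw["equipClass"].lower(),
        kwh=raw["energy_Wh"] / 1000.0,  # this vendor reports watt-hours
    )

def total_consumption(readings: list[EnergyReading]) -> float:
    """Portfolio-wide total, whether that covers 1 building or 1,000."""
    return sum(r.kwh for r in readings)

readings = [normalize_legacy_bms(
    {"siteRef": "HQ-01", "equipClass": "HVAC", "energy_Wh": 52_000})]
print(f"Total: {total_consumption(readings):.1f} kWh")  # Total: 52.0 kWh
```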

“HVAC, for example, can be responsible for up to 50% of the overall energy consumption in the commercial-building space,” explains Laughton.


For customers with green and cost-savings goals, the OpenBlue Enterprise Manager helps ensure the equipment in the central HVAC plant operates dependably, so they can take preemptive action when necessary. “Just running your central plant in a smarter way can cut HVAC energy consumption by 15% to 25% and decrease the overall building load up to 10%,” Laughton adds.

The Enterprise Manager works in part by aggregating data from multiple OpenBlue apps such as Location Manager and Companion. Both apps have real-time occupancy capability, which lets building managers power only the spaces being used. And Companion helps occupants, too, by providing productivity-enhancing tools like wayfinding and personalized temperature control.
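To make the occupancy-driven idea concrete, here is a hedged sketch of the control logic. The zone model and function are invented for illustration, not the OpenBlue API:

```python
# Illustrative only: zone names and the control dictionary are invented,
# not the actual OpenBlue API.
def reconcile_zones(occupancy: dict[str, int], powered: set[str]) -> dict:
    """Power only the spaces that are actually in use."""
    occupied = {zone for zone, count in occupancy.items() if count > 0}
    return {
        "power_on": sorted(occupied - powered),   # people just arrived
        "power_off": sorted(powered - occupied),  # space has emptied out
    }

actions = reconcile_zones(
    occupancy={"floor3-east": 12, "floor3-west": 0, "floor4": 0},
    powered={"floor3-east", "floor3-west", "floor4"},
)
print(actions)  # {'power_on': [], 'power_off': ['floor3-west', 'floor4']}
```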

Fully Integrated, AI-Based Security Systems

This kind of integration is the real value behind OpenBlue. For example, safety and security systems were once completely separate from other systems. But with the advent of IoT technology and AI, they’ve become another important source of building information, while serving their core purpose.

“With all these offerings on a common platform, security information can be used to improve building operations. And likewise, data from other systems can be leveraged to achieve higher levels of security,” says Sara Gardner, Global Head of Strategy and Marketing, Security, at JCI.

Modern security systems look very different from earlier generations, and of course they do much more. Touchless access controls that use mobile credentials or AI-based video analytics make for a hygienic and frictionless authentication process with multiple benefits.

People can move more quickly and safely through a facility when they don’t have to fumble with access cards or wait in long lines. And information about their presence can be shared with other smart systems—lighting or elevators, for instance—for improved space utilization, better energy management, and a better occupant experience.

Smart Data Analytics Drive Innovation

OpenBlue is delivered through an as-a-service model with standardized capabilities and applications. Clients can build and develop solutions on the platform to meet their needs, and applications are highly configurable to support a wide range of use cases. Customers can combine the smart-building applications and functionality that meet their immediate needs, and add capability over time as their organizations evolve.

Scaling is easy because all systems are connected via a multilayer, infrastructure-agnostic platform. OpenBlue Bridge, the ingestion layer, collects data from different building systems—thanks in large part to Intel® processor-based platforms at the edge.

Once collected, OpenBlue Cloud applies a specific schema in which the relationships between different pieces of information are understood. For example, there’s a schema to process how a building automation data point might fit with a security data point, explains Laughton: “And that’s when you can leverage the data and start to do something really innovative with it, Net Zero or otherwise.”

But a single Net Zero building or company is just the beginning. “Just imagine if every building across the United States and the world did the same,” says Laughton. “That would be a huge reduction in carbon emissions, and cost-effective, too, since companies wouldn’t have to rely on electricity purchased from the grid.”

The smart office building, warehouse, or hospital of the future is one that improves business while improving lives. By harnessing the power of AI and advanced data analytics, JCI is building it today.

Paper Mills Press On With AI Visual Inspection

Over hundreds of years, paper products have been commoditized so much that the industry average profit margin is in the low- to mid-single digits. Meanwhile, the estimated average cost of unplanned downtime for a paper or pulp plant is $220,000 per day.

In short, paper suppliers can’t afford faulty equipment. That’s why they’re using AI-enabled visual inspection to guard against complications like wet line—excess water from the pulpy mixture that’s mechanically pressed to make paper.

If it makes its way too deep into the press machinery, wet line can destroy drying paper and stop the production process itself.

The phenomenon occurs organically during the process of flattening watery pulp, so it can’t be eliminated entirely. Still, it needs to be monitored, which itself can be a challenge. Today, press operators climb ladders on the side of machinery because the water line is often visible only from acute angles. If it has progressed too far, the equipment must be readjusted from a nearby headbox, and possibly even cleaned.

AI-based computer vision can automate this process to keep presses running and wet line in check. And this isn’t some futuristic proof of concept, either. byteLAKE, a company specializing in AI-enabled visual inspection and big data analytics for manufacturing, has already deployed its Cognitive Services platform at a major European paper mill.


AI on Paper

byteLAKE’s Cognitive Services is an end-to-end IT/OT platform that performs tasks traditionally done by humans. But while further automating an automated process sounds easy, there were still specific requirements byteLAKE engineers had to meet before implementing their solution at the mill. These included plant operators’ desire to keep:

  • Press machinery unaffected
  • Production processes intact
  • Deployment costs to a minimum

These requirements immediately ruled out monitoring solutions based on conventional moisture sensors or flow meters. At the same time, they also demanded sophisticated analytics and edge processing capabilities that could keep operators from being overwhelmed by sensor data.

byteLAKE proceeded to install high-resolution cameras with an unimpeded view of wet line on the plant’s presses. Images are transmitted to an edge computing platform, where a version of the YOLO real-time object recognition algorithm, combined with byteLAKE’s AI models, determines whether wet line has crept beyond predefined boundaries.

From there, images and metadata are sent to plant management software that can alert operators or trigger commands to adjust pulp composition or increase the power of drying fans (Figure 1).

byteLAKE computing platform connects to video camera and runs AI at the edge.
Figure 1. Automated visual inspection uses AI to optimize operations at a paper manufacturing plant. (Source: byteLAKE)

Data then continues into IT systems where the Cognitive Services platform applies analytics so that operators can tune their press machinery and plant processes to operate more efficiently in the future.
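Strung together, the detect-and-alert loop might look something like the sketch below. It is an illustrative reconstruction, not byteLAKE's code: the trained detector, the pixel boundary, and the alert format are all stand-ins.

```python
import cv2  # OpenCV handles the camera; the trained detector is assumed

WET_LINE_LIMIT_Y = 420  # hypothetical pixel row marking the safe boundary

def check_frame(frame, detector):
    """Flag any detection whose box crosses the predefined boundary.

    `detector` stands in for the YOLO-plus-byteLAKE model stack; all we
    assume is that it returns bounding boxes as (x, y, w, h) tuples.
    """
    alerts = []
    for (x, y, w, h) in detector(frame):
        if y + h > WET_LINE_LIMIT_Y:  # wet line crept past the limit
            alerts.append({"bbox": (x, y, w, h), "action": "notify_operator"})
    return alerts

cap = cv2.VideoCapture(0)  # the plant uses fixed high-resolution cameras
ok, frame = cap.read()
if ok:
    # A no-op detector keeps the sketch self-contained and runnable
    for alert in check_frame(frame, detector=lambda f: []):
        print(alert)  # in production: forwarded to plant management software
```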

From Months to Automatic

Besides the complex challenge of integrating new and existing IT and OT platforms, byteLAKE spent considerable energy constructing a data set of paper press and wet-line images to train its object detection model. Then it had to optimize the on-premises hardware so it could effectively execute the new algorithm. That work started with evaluating an Intel® Core i5-based edge computing solution.

Intel Core processors are more cost-effective and power-efficient than the alternative. And in the case of wet line analytics, they can deliver a competitive 12 frames per second (FPS) of image processing, thanks to a 10x boost in neural network execution provided by the Intel® OpenVINO Toolkit, a development suite that optimizes AI algorithms to run as efficiently as possible on Intel® CPUs, FPGAs, GPUs, and accelerators.

“You can do it manually and spend many months, or you can leverage YOLO and OpenVINO, which can go through your product, inspect the architecture, and eventually give you a 10x speed up,” says Marcin Rojek, byteLAKE’s co-founder.
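For a sense of the deployment side, a minimal OpenVINO inference sketch follows. The model filename is a placeholder, and the Python API naming varies across OpenVINO releases:

```python
import numpy as np
from openvino.runtime import Core  # module path varies across releases

core = Core()
# Placeholder filenames: a converted detector ships as IR (.xml/.bin) files
model = core.read_model("wet_line_yolo.xml")
compiled = core.compile_model(model, device_name="CPU")

# One dummy frame in NCHW layout; the real shape depends on the model
frame = np.zeros((1, 3, 416, 416), dtype=np.float32)
result = compiled([frame])[compiled.output(0)]
print(result.shape)  # raw detections, decoded by downstream logic
```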

AI-Assisted Papermaking: The Writing Is on the Wall

The byteLAKE Wet Line Detector (part of byteLAKE’s Cognitive Services) deployment is one of the first cases of AI adoption in the paper industry, but it certainly won’t be the last. Because wet-line monitoring requires only a couple of images each minute, the system’s 12 FPS leaves more than enough headroom to perform wet line detection for an entire plant on the same edge hardware.

And byteLAKE is also developing solutions that go beyond the paper mill, deploying AI in use cases like computational fluid dynamics (CFD). According to Rojek, the company is currently using deep learning to cut CFD simulation times for liquids from more than four hours to less than 10 minutes, all while retaining better than 93% accuracy. The product—CFD Suite—is available for the chemical industry.

“In many cases, the technology has matured enough that nowadays it is relatively easy to leverage various building blocks of AI for the solution we need,” he asserts.

We’ll only see more smart-factory AI solutions like these, especially where margins are paper thin.

The Versatility of Intelligent Cameras and Video Analytics

Without a doubt, video systems combined with real-time analytics are essential to security and safety—but you would be surprised at the diverse use cases where this technology can be applied. This versatility not only creates new opportunities for organizations of all types but also for the systems integrators that serve them.

Today’s video market demands easy-to-deploy, end-to-end solutions that include both hardware and software elements so integrators don’t have to piece together systems from different manufacturers and vendors. And their customers want to do more with their video security than just catching bad actors—using newer technology and analytics to take a proactive approach to security.

These systems are not always easy to design and deploy. Axis Communications, a global leader in network video systems, is addressing that problem.

The AXIS Camera Station S-Series Network Video Recorders provide all the components needed for security and other applications. This includes cameras, recording hardware, and software, with the option to add components such as speakers, access control, or analytics. From sporting venues to campus security to retail, the solution can be tailored for unique applications and customer needs—providing the opportunity for SIs to scale into new markets.

A New Vision for School Safety

Tired of sifting through lengthy videos and dealing with false alarms, leaders at Washington Community High School in Illinois were looking for a modern video security system to upgrade their analog cameras. Together with their SI, they chose to go with Axis due to the ability to design a complete solution from a single manufacturer.

The SI leveraged the AXIS Site Designer tool to lay out which cameras would be installed in each location to provide maximum coverage. The school installed 72 cameras along with AXIS Camera Station video management software.

“Now, instead of having to search through hours of analog video, security officers can find what they need quickly and export it easily if needed,” says Mitch Mershon, Business Development Manager for End-to-End Solutions at Axis. “They can also receive alerts and videos on mobile devices, allowing quick evaluation of threats and avoidance of false alarms.”


New Use Cases Bring New Opportunities to Security SIs

Ice hockey is the lifeblood of Canada, and the Ontario Hockey League wanted to try something new. The organization needed a video monitoring solution that was consistent and reliable, and that could deliver high-quality video able to track pucks traveling at more than 145 kilometers an hour.

The solution includes two cameras directed toward the nets and one on the game clock at each of the league’s 20 rink locations. This configuration provides high-quality video streams in real time for television broadcasting and for officials to review penalties and goals (Video 1).

Video 1. The Ontario Hockey League uses high-definition video to review penalties and goals, and broadcast live video and replays on television. (Source: Axis Communications)

“Many people don’t realize that broadcast video and video surveillance are two very different things,” says Mershon. “For us to be able to bridge that gap for them was an amazing thing.”

Powered by Intel® processor-based recorders, the Axis solution can provide additional possibilities for retail settings. Add-on components such as audio speakers and integrated video management software can provide content from background music to automated warnings and notifications.

For example, the video management software can detect if customers are piling up in a grocery line, and play an announcement in the employee break room to send someone to assist.

“Rather than having a cashier stop what they’re doing and ask a manager to make an announcement, it can all be done automatically,” says Mershon.
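The underlying rule is simple event logic: act only when a long line persists, so a momentary cluster of shoppers doesn't trigger the announcement. A hedged sketch, with invented thresholds:

```python
QUEUE_LIMIT = 4       # invented threshold: shoppers in line before acting
SUSTAIN_SECONDS = 30  # ignore momentary clusters

def queue_monitor(person_counts):
    """Yield an announcement trigger when a long line persists.

    `person_counts` is any iterable of (timestamp, count) pairs from the
    camera analytics; the rule logic is the point of the sketch.
    """
    over_since = None
    for ts, count in person_counts:
        if count > QUEUE_LIMIT:
            over_since = ts if over_since is None else over_since
            if ts - over_since >= SUSTAIN_SECONDS:
                yield {"zone": "checkout", "action": "breakroom_announcement"}
                over_since = None  # rearm after firing
        else:
            over_since = None

samples = [(0, 2), (10, 5), (20, 6), (30, 6), (40, 7)]
print(list(queue_monitor(samples)))  # fires once, at t=40
```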

Another useful retail application is the video management software’s ability to trigger an alert when someone’s behavior is unusual. A speaker can play a message saying “You are being surveilled,” which in turn deters them from carrying out bad intentions, such as theft or trespassing. These alerts are termed proactive security and can supplement security guards’ efforts to decrease response time.

“A car dealership was having trouble with break-ins, so they installed a video analytics system with an audio speaker to play alerts when suspect behavior was detected,” Mershon says. “As a result, 93% of potential intruders were deterred without calling law enforcement.”

Cybersecurity and Privacy

When it comes to capturing video surveillance, personal privacy and cybersecurity concerns are top of mind for many SIs and their customers. The solution incorporates current encryption standards for data transfer and storage, as well as cybersecurity strategies like not requiring port forwarding and static IPs.

“We also work with certificates to ensure that whenever a camera talks to our server, both of them have a shared piece of information—so someone can’t unplug a camera, plug in their laptop, and be able to access the entire network,” Mershon says.

Personal privacy can be maintained through the use of masking, a setting that blocks part of the image. Security personnel can pre-set masks, apply them in real time, or add them after the recording is made but before it’s exported from AXIS Camera Station.
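Conceptually, a mask is just a region blanked out before footage leaves the system. The sketch below shows the idea with OpenCV; in AXIS Camera Station masks are configured in the software rather than coded, and the coordinates here are invented:

```python
import cv2
import numpy as np

def apply_privacy_mask(frame: np.ndarray, polygon: np.ndarray) -> np.ndarray:
    """Black out a pre-set region before footage is exported."""
    masked = frame.copy()
    cv2.fillPoly(masked, [polygon], color=(0, 0, 0))
    return masked

frame = np.full((480, 640, 3), 255, dtype=np.uint8)  # stand-in image
region = np.array([[400, 50], [620, 50], [620, 300], [400, 300]],
                  dtype=np.int32)  # invented mask coordinates
cv2.imwrite("export_ready.jpg", apply_privacy_mask(frame, region))
```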

End-to-End Systems Streamline Deployments

Systems integrators value the solution’s ease of deployment and support for applications beyond security, lowering costs and creating new opportunities for end customers.

“At the core, everybody is looking for security, but then we layer on all these other opportunities that contribute to business efficiency and proactive security,” says Mershon. “There’s a lot of room for creative applications.”

Axis also arms SIs with the tools they need to streamline the design and quotation stage of projects. And they can reduce labor time and costs when setting up the video management system by automatically uploading a project right from Axis Site Designer into AXIS Camera Station.

A simple licensing model and a two-tiered distribution approach are also advantageous to SIs, because they do not need to procure hardware or software until an end customer has paid for it.

“Integrators don’t have to sit on stock at a warehouse, and often they negotiate an agreement with their distributor to make sure everything stays up-to-date with the latest firmware and IPs,” Mershon says. “All this makes it very easy for an SI to walk in, make a purchase, and be out the door and on their way.”

The impressive range of use cases for today’s video technologies means this field is wide open for new opportunities and creative applications. So what’s your vision for the future?

HPEC + AI = Predictive Maintenance ROI

When you peel away all the sophistication and innovation of technologies like predictive maintenance, they are, at their core, cost-saving solutions.

Peter Darveau is Head of Engineering and Principal Engineer at Hexagon Technology Inc., a technical services firm that specializes in automation systems, safety-instrumented devices, and machine learning. He acknowledges that the first goal of predictive maintenance is to analyze the behavioral patterns of a machine over time to prevent system failures and optimize performance. The ultimate goal, of course, is more agile, competitive, and profitable operations.

“If you’ve got a $5 million piece of factory equipment, you finance 80% of it at a 5% interest rate, which is pretty typical, and you just extend its life for one year, you’re saving $200,000,” Darveau says. “If you can just get one more year, it’s an attractive payback. So right away, there’s interest.”
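The arithmetic checks out: 80% of $5 million is a $4 million loan, and 5% annual interest on $4 million comes to $200,000 for each year of extended service.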

But interest doesn’t guarantee simplicity.

Feature Engineering: A Path to Operational AI

Automating the process of predicting equipment failures and recommending actions to prevent them requires AI. And for AI to be effective, humans must lend a hand.

Engineers have to generate data sets comprising equipment’s “normal” and “fault” conditions for AI algorithms to identify anomalies. What’s normal and what’s faulty are extracted from analog signals like vibration or acoustics and then classified accordingly through a process called feature engineering. But it’s a manual procedure that requires extensive signal processing expertise to appropriately extract, evaluate, and classify signal data from the target machine.

The complexity of feature engineering explains why it’s taken so long for predictive maintenance strategies to be deployed.

Hexagon Technology helps companies implement asset monitoring solutions for large-scale industrial equipment like steel milling machines that handle multi-ton fragments of raw materials. Whenever a mass of materials moves, it vibrates the machine’s foundations, which were designed to accommodate enormous weight. Hexagon develops what are called prognostics and availability monitoring (P&AM) systems, which continuously analyze these structures so that their integrity is maintained.

“In the past when we’ve done vibration analysis, we had to do so much just to have a useful data set,” Darveau explains. “If you wanted to analyze a noisy signal, you had to go through tons and tons of calculations. You had to do Fast Fourier Transforms (FFTs)—a method for transforming a function of time into a function of frequency—to filter out the noise, and then you had to condition the data.”
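That manual pipeline can be sketched in a few lines of Python: transform the vibration trace into the frequency domain, then summarize it as hand-picked features. The band edges and sampling rate below are illustrative assumptions:

```python
import numpy as np

def band_energy_features(signal: np.ndarray, fs: float) -> dict:
    """Hand-built features: FFT the vibration trace, then sum the
    energy per frequency band. Band edges here are illustrative."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    bands = {"low": (0, 100), "mid": (100, 1000), "high": (1000, fs / 2)}
    return {name: float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for name, (lo, hi) in bands.items()}

fs = 10_000  # assumed 10 kHz accelerometer sampling rate
t = np.arange(0, 1.0, 1.0 / fs)
vibration = np.sin(2 * np.pi * 60 * t) + 0.1 * np.random.randn(len(t))
print(band_energy_features(vibration, fs))  # most energy lands in "low"
```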

Today, Hexagon can bypass much of the feature engineering process thanks to advances in high-performance embedded computing (HPEC). Specifically, the company is using Intel® Xeon® processors, Intel® Movidius VPUs, and an inference engine built on the Intel® OpenVINO Toolkit to find patterns in streaming waveforms instead of extracting features from complex analog signals.

“Because of the higher performance now on Intel® processors, we can do inferencing of video,” Darveau says. “The fact that it’s a video and I can chop up my frame means that instead of taking raw data and looking at very specific areas, I can easily look at patterns that happen over time and match it up with data from a simulation. Then we let the inference engine do its work.”


The Living Design of AI and ML Models

Hexagon’s P&AM solution is technically a functioning prototype that has been operationally deployed for two years at a plant like the one described above. During that time, Hexagon has more than doubled its video analytics performance, from 30 to 60 FPS (roughly the limit of what the human eye can perceive) to 140 FPS. The system’s accuracy in detecting observed features has also increased to 95%.

Still, Darveau is aware that his system must achieve accuracies on the order of 99.999% before it’s ready for widespread commercial deployment.

“Because this environment is dynamic, and this is true for any AI or machine learning-type model, the model is a living thing. It is up to us to make improvements to the system,” he explains. “And it’s going to be the last mile that’s going to be the toughest. We anticipate updating this model several times until we feel that it’s efficient enough.”

Knowing that their P&AM solution would be a living design from the start made the selection of OpenVINO an easy one. Not only do its optimization features maximize execution performance on today’s processors, but its portability across CPUs, GPUs, FPGAs, and accelerators also means it can scale workloads from legacy systems to next-generation processors. This lets engineers update models independently of the processors, knowing there will always be a hardware platform to efficiently execute their AI.

AI, Predictive Maintenance, and the Road to ROI

One final requirement of predictive maintenance is that someone shepherd the living AI models. Ideally this is a member of the technical staff where the system is installed. But it’s unlikely that such a role exists at many companies today.

That makes tools such as Intel® DevCloud—which provides a sandbox for testing models before deploying them on operational hardware—a valuable platform. For organizations looking to support P&AM systems on their own, DevCloud can be used to experiment with analytics and diagnostics capabilities or learn how AI will integrate with legacy devices.

By combining development platforms like DevCloud and OpenVINO-optimized deployment hardware, automation companies can finally implement usable predictive maintenance systems. And in doing so, they can smooth the road to ROI.

Moving the Needle to Industry 4.0 with 5G and the Edge

There’s no way around it: If you’re a manufacturer chugging along with Industry 3.0 today, you need to be moving your shop floor toward Industry 4.0 for tomorrow—with plans for 5.0 if you want to be around next week.

But how does the manufacturing process need to change to get there? 5G may just be the answer.

We talk to Philippe Ravix, Global Digital Manufacturing Solution Architect at Capgemini, a global leader in digital transformation, technology, and engineering, about having a road map for the future, the role of edge computing in manufacturing, and how 5G can lead to Industry 5.0.

Certainly a lot has changed in the manufacturing space recently. Can you talk about how it has evolved since the advent of the Internet of Things?

We are now in the digital transformation era—also called Industry 4.0, factory of the future, intelligent industry, or smart factory. Those terms express not only that we need a data-oriented approach, but also that we need collaboration with the foundation of manufacturing—what we call the Golden Triangle—which is based on three main systems: the PLM, the MES, and the ERP.

The advent of IoT is something that will have an impact on the manufacturing process based on real-time data collection and analytics; and it will complement existing systems that are more process oriented. So, it’s not: I will replace. It’s really: I will complement and collaborate with the existing systems that manage the shop floor and the manufacturer.

IoT is clearly one of the driving forces behind the Industry 4.0 movement. I think it will first enable immense automation. That is one of the key points—first, leverage data collection from the shop floor to the cloud; and at the end, leverage advanced analytics. Why? To optimize workflow and processes inside the manufacturer. This will be the next step after a lean strategy; it will be a kind of lean software, to have another step of process optimization inside the company and inside the shop floor.

What are some of the challenges that manufacturers face as they try to grow and scale their IoT initiatives?

Automation, flexibility, and sustainability are the three main challenges that we see.

The first one, automation, is clearly one of the key topics that we see in the market. How can we integrate technologies to automate manufacturing processes?

The next one is flexibility. Today it takes a long time—if you manufacture a product in the line—to change that line in order to manufacture another product.

And last, sustainability: to make manufacturing cost-effective by improving the efficiency of the equipment and the processes; to minimize energy consumption; to decrease manufacturing time and lead time; to reduce waste, and to use less material.

The advent of 5G is opening a lot of really exciting new possibilities. How will 5G address where manufacturing is going next?

I would say there are two game changers in IoT today that will create the IoT of the future. The first one, 5G, is definitely one of the game changers. The edge is the other one. When I started in IoT 10 years ago, it was a device sending data to the cloud for analytics and for human interaction. It was more cloud-to-human. This is a south-north connection from device to cloud, without a lot of data.

Now, with the amount of data and the number of devices deployed, at the end of the day you have a lot of data, and you’re not able to send everything to the cloud. So the edge is really important, and the key part in addressing this manufacturing challenge is having this intermediate platform to collect data, to standardize data, to compute data. Then after you can say: I can send to the cloud, I can send to my colleague, to another edge, and so on and so on. So edge is clearly a game changer for IoT in manufacturing.
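A toy example of that collect-standardize-compute pattern: the edge summarizes a window of raw readings and forwards only the compact result northbound. The machine name, message shape, and transport are deployment choices, not any specific product's API:

```python
import json
import statistics

def summarize_window(readings: list[float]) -> dict:
    """Standardize and compute at the edge instead of shipping raw data."""
    return {"count": len(readings),
            "mean": round(statistics.fmean(readings), 3),
            "min": min(readings),
            "max": max(readings)}

# 1,000 raw samples collapse into one small northbound message
raw = [20.0 + 0.01 * i for i in range(1000)]
payload = json.dumps({"machine": "press-07", "temp_c": summarize_window(raw)})
print(len(payload), "bytes instead of", len(json.dumps(raw)))
```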

5G is also, for sure, a key technology for IoT. Why? What is interesting with 5G is that you will be able to avoid having wire connectivity on the shop floor. So along with the capabilities of the edge—meaning speed and near-zero latency—5G will eliminate wire connectivity and will offer that degree of flexibility that I mentioned as being a key challenge in the process, by adding mobility for everything.

Can you talk a little bit more about the edge architectures you see emerging out of this new paradigm?

The edge platform is really this intermediate platform that you can have at the device level, at the machine level, at the plant level. And each level of edge will have features or capabilities for compute and storage. And so this is the value of the edge.

The edge is a platform, so it’s 100% a cloud-style architecture. We can see the edge as part of the cloud—so we do not disconnect the edge from the cloud, or the cloud from the edge. In fact, the edge is seen as part of the cloud—the same architecture style. And it’s why the big cloud providers—like Microsoft Azure, AWS, or Google—now have their own edge platforms on the market. This is why there was interest from the bigger cloud players first.

And with this kind of architecture, there is also an interesting point that today the main connectivity is from device to cloud—so south to north. With edge, you can create a collaboration from east to west. You can create east-to-west connectivity—meaning, the edge will be able to discuss and to manage integration and to exchange data from one edge to another one from other specific use cases.

So everything at the shop-floor level will optimize the process—meaning that you don’t need to send data to the cloud to optimize the process; you can optimize the process at the plant level. So that’s why you have this collaboration between edge and all this data.

How does Capgemini help manufacturers implement such a system and deal with all the complexities?

Manufacturing is a complex system, for sure, with a lot of complexity from everything—from the connectivity, from the data management, from the data, from the use case, and from the architecture point of view. So we support a lot of clients in digital manufacturing transformation. We have a dedicated approach for this, starting with both a business vision and business use cases, and an architecture view.

We always start with business and architecture, because in digital transformation there is the question: What is the right use case? What is the value of this use case? What is the road map? But also there is the digital question—meaning the technology. It would be a mistake to separate business from technology today. So we start to support the client with the business and IT roadmap—I would say that is the first phase.

There is a lot of experimentation with each client, and the key problem now is not to identify or to validate the business value; it’s how to scale. This is the challenge today. We know that in IoT we can develop a lot of proofs of concept, but the value of IoT is not in the proof of concept, which costs money. It’s in the global deployment.

After the business and IT roadmap, we directly go to a scaling program with the client—meaning architecture. And one of the key points is really to have a platform strategy. Do you have the connectivity platform? Which one? Do you have the data platform? Which one? Do you have the analytics platform? Which one? How do you manage the global integration between all the systems?

Everything is based on the platform; everything is based on the cloud-style architecture. So we define this detailed architecture. From the key use cases that have been validated during phase one—or else during the previous work done by the client—we select no more than five use cases for development and global deployment.

Where does security factor into your thinking about this?

Security can be seen globally on the shop floor. We have this platform, and most of the time on the shop floor we have a private network, and we manage the private network in 5G. And we manage also all the security between the system and the machine with encryption.

And after, on the cloud, we use the security coming from the cloud provider. When we work with Azure and AWS—meaning that we know that it’s a secure system—we don’t need to add anything. The point is how to manage security between the plant and the cloud.

So, we can manage the security from plant to cloud, and inside the plant we manage the security with the network. It’s something that we address by design in the solution that we have.

How do you work with Intel® to achieve success? And how does that relationship support everything else you’re doing?

Intel® has been a strategic partner for Capgemini for many years now. At Capgemini we have a global alliance team at the group level, and Intel is part of this global alliance. With Intel we know that we will always have access to the best technologies, to expertise, and to innovation. It’s really a technology partner that is very powerful, and it’s very powerful in the digital transformation world.

One of the key points also with Intel is that they have a huge ecosystem of partners; that is powerful for us. And we can access all these partners from Intel. When we have a question, Intel will be able to put in front of Capgemini the right partners with the right technology, and we can have direct access to the right technology for all projects. It’s also a way to accelerate and to secure our delivery. And, the last point, and very important for us, is that we have a joint collaboration in solution development with Intel.

What advice would you give manufacturers considering whether to deploy 5G and edge computing?

You need to have a clear vision of the market—meaning that if you don’t move, your competitors will move, and you will have lost market share. But be sure that your client has a very good understanding of the technology: where it is today, where they want to go tomorrow, and why.

Second point: have the right architecture—meaning the right platform, integrating with the edge—because everything will move very quickly. So have a clear view of the architecture and use cloud-style architecture. Because if not, you will have silos.

And last, also be sure that the client has a foundation in Industry 3.0—a lot of clients do not have an MES, for example. So they have an ERP, but they do not have an MES. A client that does not have an MES should not collect data today—it’s too early. So have the right foundation—what we call the Golden Triangle.

But if you want success, you need to have a clear vision of where you want to be in terms of business tomorrow.

To learn more about the role of 5G and edge in smart manufacturing, listen to our podcast The 5G Factory of the Future with Capgemini.

AI and CV Detect Production Problems Before They Happen

Digital transformation is becoming more important than ever as manufacturers continue to feel the ripple effects of COVID-19. Before the pandemic, companies were already struggling to find experienced operators to manage the shop floor. With social distancing and a limit on the number of workers that can be in the room at one time, manual operations have become that much more difficult.

Factories must start implementing advanced technologies such as artificial intelligence (AI) to automate operations and be able to access equipment remotely.

“One of the biggest challenges for manufacturers today is being able to accurately detect potential failures and issues in equipment or products,” says Saito Akihiro, Group Manager of the Business Development Group, Business Development Department and Technology Division at Okaya Electronics, a manufacturing solutions distributor.

It’s imperative that manufacturers regularly maintain machines to catch any unusual behaviors or patterns before they become a bigger issue. When unexpected equipment failure halts production, costs soar, and margins shrink.

The same is true when product defects go undetected. Materials and even finished goods may need to be scrapped or customers could receive flawed merchandise.

“Manufacturers need to ensure the entire production line is always operating efficiently,” says Akihiro. But the problem is that in many cases these tasks are still a manual process and require highly advanced and specialized skills from experienced technicians.

Typically, technicians can detect equipment anomalies or product defects just by looking at or listening to machinery. But this comes with two potential drawbacks.

First, since the process relies on human judgment, the results vary depending on the individual.

Second, the number of technicians with this highly specialized knowledge is diminishing. A recent report from the consulting firm Deloitte found that there will be an estimated 2.1 million open manufacturing positions to fill by 2030 (Figure 1).

Deloitte graph analyzing the decline of manufacturing talent from 2015 to 2020.
Figure 1. It is becoming increasingly more difficult for manufacturers to find the right skills and talent. (Source: Deloitte)

Not being able to fill these jobs will limit a manufacturer’s ability to implement new technology, respond to the changing market, maintain production levels, and improve growth.

AI-Enabled Product Quality Inspection

AI and machine learning (ML) are key tools in addressing these challenges, by automatically detecting problematic equipment, and more accurately predicting necessary maintenance or replacement. As a result, operators can focus on more valuable tasks.

For example, a major Japanese auto parts manufacturer was finding that its manual product inspection process was resulting in inconsistent and inaccurate results. It decided to leverage an AI-driven anomaly detection solution to automate the process. As a result, the company was able to not only improve quality but also increase inspection to as many as six products simultaneously, according to Akihiro.

The manufacturer achieved these results by deploying the Impulse solution, a platform based on software from AI company Brains Technology—named a Gartner Cool Vendor for performance analysis and AIOps.


Okaya packages Impulse into a kit ready for proof of concept (PoC) or production environments, with an edge computer and the Brains Technology software. The kit features vision and imaging sensors for data collection, AI and ML models for pattern and anomaly detection, and Intel® processors for high performance and efficiency.

As an aggregator, Okaya makes it easier for SIs to provide IIoT solutions like AI-driven anomaly detection to their existing customers. And the company’s services—from logistics to integration to technical support—free up SIs to focus on winning new customers as well.

Making Sense of the Data

“Impulse can also automatically analyze and detect patterns from data without human intervention,” Akihiro explains. “This is especially important as the amount of data being made available from equipment and automated product inspection is becoming overwhelming to manage and extract insights from.”

For instance, one Japan-based company had already undergone some predictive maintenance efforts to collect and analyze equipment sensor data but was still using human operators to find and uncover valuable insights. This was becoming too complex to manage manually.

It wanted a platform that could quickly analyze time-series data and extract abnormalities based on that information.

The company conducted a proof of concept using equipment sensor data for two months to verify whether Impulse could uncover unusual behaviors before they became an issue. It found the solution was able to do this accurately, and to shorten the time previously needed to analyze the data. The company plans to use the Impulse solution for various other IoT data analyses in the future.
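A simple stand-in for that kind of time-series screening is a rolling z-score, which flags any point far outside the recent baseline. Platforms like Impulse fit far more sophisticated models automatically; this sketch only illustrates the principle:

```python
import statistics
from collections import deque

def rolling_zscore_alerts(series, window=50, threshold=3.0):
    """Flag points that sit far outside the recent baseline."""
    history = deque(maxlen=window)
    for i, x in enumerate(series):
        if len(history) == window:
            mu = statistics.fmean(history)
            sigma = statistics.pstdev(history) or 1e-9  # avoid divide-by-zero
            if abs(x - mu) / sigma > threshold:
                yield i, x
        history.append(x)

# A steady sensor signal with one injected spike at index 80
data = [1.0] * 100
data[80] = 9.0
print(list(rolling_zscore_alerts(data)))  # [(80, 9.0)]
```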

Democratizing AI for Anomaly Detection

Impulse includes an auto-modeling function that can create anomaly detection models using manufacturers’ still image and video data. This means that SIs and end users can easily create their own models by inputting digital data for analysis.

A global engineering and construction company that specializes in environment, energy, and social infrastructure technologies recently leveraged Impulse because of the platform’s ability to build AI models without programming knowledge. The company was not only able to detect abnormalities with Impulse but also to take proactive measures, such as a planned shutdown, based on its findings.

End-to-end solutions like Impulse will be game changing for companies that do not have the in-house skills to develop and deploy the latest technologies required for agile manufacturing practices. AI, machine learning, and computer vision are the key for manufacturers to be more nimble, spot problems on the line, and resolve QA problems before they happen.

Industrial Predictive Maintenance Drives Factories Forward

Time-based monitoring (TBM) has long been the standard for inspecting and repairing manufacturing equipment. But as digital transformation continues to reshape factories, this traditional approach falls short. TBM overlooks early equipment anomalies, leading to unexpected downtime and costly malfunctions. Compounding the issue, TBM depends on highly skilled personnel—an increasingly scarce resource.

“As one of the key goals for digital transformation, business leaders are looking for ways to automate processes. To do so, it is necessary to detect any abnormalities in the factory machines on behalf of on-site workers,” says Daisuke Nishimura, Managing Director at Macnica, an IoT solution aggregator.

To remain competitive and avoid unexpected machine failures, manufacturers are shifting from TBM to industrial predictive maintenance. This shift relies on condition-based monitoring (CBM), an approach that uses sensor technology and AI to deliver smarter, more-proactive results.

A Case for Condition-Based Monitoring

One Japan-based aerospace manufacturer realized the benefits of condition-based monitoring when it began automating its assembly lines. The manufacturer discovered its perforation process relied too heavily on highly experienced workers, making it difficult to automate. The company set out on a mission to solve this issue using sensors and AI.

“To prevent fuel leaks on aircrafts, there are strict requirements on the drilled-hole quality,” says Nishimura. “If the hole doesn’t meet these criteria, the fix would be costly and require a significant amount of rework.” This meant that any abnormality of the perforating machines must not go undetected.

The goal was to detect machine failures using vibration sensor data. The manufacturer turned to Macnica to help implement the sensors and collect condition data.


Macnica introduced the SENSPIDER Smart Sensor Gateway, an innovative solution that gathers sensor data at the edge and uploads it to the cloud. This was able to provide the advanced condition-based monitoring services the customer was looking for.

“SENSPIDER can help system integrators work with customers on a wide range of devices and problems,” says Nishimura. “We do this by flexibly supporting optimal sensor deployment and preprocessing for each use case within a single box.”

Now the team can detect equipment failures in real time, and is actively working on integrating more CBM features into their machines. These advancements will help the customer reduce its dependency on highly skilled workers and move closer to achieving full assembly line automation.

Condition-Based Monitoring Unlocks New Industrial Innovation

“The impact of CBM is significant in production line automation, as equipment malfunctions can go unnoticed and lead to a pile of defective products,” says Nishimura. CBM also leads to new opportunities for systems integrators (SIs) and machine builders.

SIs can transform their business model by launching new features and services that leverage CBM. For example, they can remotely monitor equipment conditions and perform optimal maintenance for their customers. And they can provide advanced services with automated, AI-based alarms that notify if there are signs of abnormalities.

Machine builders looking to offer as-a-service solutions can also benefit from these capabilities.

“The growing market demand for CBM has become a major factor that motivates machinery manufacturers to develop CBM as an added value,” Nishimura says. “It is difficult to expect dramatic improvements in individual machine performance. More and more manufacturers are starting to focus on improving services as a new differentiating factor.”

Industrial Predictive Maintenance in Action

Implementing CBM is not without its challenges. Manufacturers can struggle to move beyond proof-of-concept (PoC) to production if they lack a clear goal.

“It is important to determine whether the system is worth building and valuable for users by considering the cost of maintenance, the cost impact of equipment failure, and the benefits of improved productivity and quality before proceeding with the project,” Nishimura says.

Success also depends on buy-in from upper management, who can allocate the appropriate budget and resources to get projects off the ground. “This kind of problem often occurs if the project is not aligned with the company’s strategic plans and policies, or if it is not recognized as a worthwhile initiative,” Nishimura explains.

Other technical challenges of condition-based monitoring include choosing the right sensors, managing hardware costs for mass production and operation, and acquiring effective training data.

With solutions like SENSPIDER, SIs can incorporate customized functions, integrate with cloud environments, and accelerate development and deployment. And by leveraging Intel® SoC technology, Macnica can provide a high-performance platform at a lower cost than other solutions.

The company believes a five-phase approach is the best path to CBM development: building the sensing environment, performing simple data analysis, iterating on models, running a PoC, and productizing.

“We recommend beginning with identifying the target machine type, building the sensor environment, and collecting the preliminary sensor data. This will help you run a quick initial iteration of data analysis,” says Nishimura.

The Future of Digital Transformation in Manufacturing

Condition-based monitoring is just one of the first steps to an autonomous manufacturing future where innovations like edge AI and computer vision truly enable the smart factory. In addition to remote monitoring and industrial predictive maintenance, the latest technologies can automatically tune equipment to best suit operating conditions.

Preventing machine defects and degradation clearly helps manufacturers lower costs and improve margins. Machine builders and SIs win, too. With as-a-service offerings, they create a more sustainable and profitable business model.


Next-Gen Processors and Virtualization Cut SWaP Tradeoffs

The Columbia Vertol 234 heavy-lift helicopter is used in emergency response and aerial firefighting operations around the world. The aircraft can carry up to 51,000 pounds of water or other flame retardants to wildfires up to 850 nautical miles away.

Despite its vast capacity, every ounce matters in the design of aircraft like the Vertol 234. With each onboard system and component, aerospace engineers trade away range, payload capacity, or both. Those tradeoffs can be the difference in traveling hundreds of additional miles or making one more pass at a fire line.

Computational resources come at the ultimate premium in any aircraft. Size, weight, and power—known as SWaP—often dictate which electronic systems are included, and which ones are left behind.

“You might have a communication system,” says Aaron Frank, Senior Product Manager at Curtiss-Wright, a leading supplier of airborne electronics. “You may also have a separate system performing mapping or mission processing. And you may have additional systems dealing with GPS and radar. Typically, these have all been separate boxes which are put into an aircraft. That’s because each system is developed separately and uses all the available processing power to implement those applications.”

Virtualization technology can reduce these SWaP tradeoffs by allowing multiple independent functions to be consolidated onto one system. But it requires a level of compute performance that has not been easily accessible to avionics engineers until now.

Today, increasing demand for functionality in modern aircraft is driving a critical need for virtualized avionics systems that combine separate capabilities into a single ruggedized and compact computer. New advances in the 11th Gen Intel® Core vPro®, Intel® Xeon® W-11000E Series, and Intel® Celeron® processors (previously known as Tiger Lake H) support these designs, and do so without sacrificing functional safety or determinism.


Expanding Virtualization on a Single Device

Virtualization and the concept of running multiple applications on a single piece of hardware is not new. In fact, Curtiss-Wright and other aerospace electronics suppliers have used virtualization for years. But it hasn’t been effective in aerospace systems for two reasons:

  • Multicore virtualization adds real-time performance overhead with each additional virtualized core. This negatively impacts functional safety because it limits onboard computers’ ability to respond instantly.
  • Processors for rugged embedded systems have typically topped out at four cores for running virtualized workloads in aerospace systems. But four cores do not add much value in virtualized or high-performance embedded computing (HPEC) applications.

The 11th Gen Intel® Core vPro®, Intel® Xeon® W-11000E Series, and Intel® Celeron® processors address these issues in multiple ways, starting with sheer performance enhancements. “Octal-core processors mean that instead of just using virtualization for one or two processes or applications, you can run three, four, five, six applications on the processor and still get very good performance,” Frank says.

The platform also adds significant performance improvements via new instruction sets designed for the complex math operations of signal processing applications like radar, as well as applications where AI and ML are applied in aircraft systems.

SWaP Savings in Mission-Critical Designs

Curtiss-Wright’s single-board computer—the 6U VPX6-1961—was designed around the Intel® Xeon® W-11000E Series processor, which incorporates so much performance in its eight CPU cores that fewer modules are needed per system. This translates into direct SWaP savings, as new capabilities like AI and ML can run simultaneously on the same VPX6-1961 board as other applications, instead of on a separate CPU or GPU card, or a whole other discrete system.

Advanced virtualization features on the new Intel platform are key to executing tasks like AI/ML inferencing alongside mission-critical avionics applications on the VPX6-1961. The hardware-accelerated nature of these technologies also helps offset overhead incurred with virtualized workloads, and even extends to connected devices (a quick host-side check of these capabilities is sketched after the list):

  • Intel® Virtualization Technology (Intel® VT-x) isolates computing activities into separate partitions to improve manageability.
  • Intel® VT-x with Extended Page Tables (EPT) accelerates memory-intensive virtual applications by optimizing page table management to reduce memory and power overhead.
  • Intel® Virtualization Technology for Directed I/O (Intel® VT-d) enables I/O-device virtualization to increase the performance of peripherals in virtual environments and enhance security and reliability.
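As a quick sanity check of the first two features, a Linux host advertises them as CPU flags ("vmx" for VT-x, "ept" for Extended Page Tables); VT-d support is reported through firmware and the IOMMU rather than /proc/cpuinfo. A minimal sketch:

```python
def cpu_virtualization_flags(path="/proc/cpuinfo"):
    """Report whether the CPU advertises VT-x ("vmx") and EPT ("ept")."""
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                return {"vt-x": "vmx" in flags, "ept": "ept" in flags}
    return {}

print(cpu_virtualization_flags())  # e.g. {'vt-x': True, 'ept': True}
```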

Once implemented, the benefits of virtualization extend to the functional safety front. Because virtualization support is rooted in the Intel processor, it’s easier to robustly partition applications or data running on one core from another. This includes different blocks of legacy code from a formerly discrete system that may already have been safety certified.

This streamlines system traceability and compliance to functional safety standards, which helps accelerate time to market. To further assist in this process, these processors are also backed by the Intel® Functional Safety Essential Design Package (Intel® FSEDP), a TUV-qualified tool for gathering and documenting safety artifacts that span the entire solution stack.

Integrated Technologies Extend Aviation Innovations

Curtiss-Wright has taken full advantage of the gains Intel’s latest CPUs provide, but the advantages don’t stop there.

End users of avionics systems based on the VPX standard may be able to consolidate their equipment today by dropping the backward- and pin-compatible VPX6-1961 into existing chassis and removing single-function solutions. Because the solution is standards-based and capable of hosting legacy code in securely partitioned containers, compliance headaches should be minimal.

Now, everyone from engineers maintaining legacy systems to designers of next-generation aviation systems have a smooth runway for making every ounce matter.

Retail Storytelling: The Digital Edition

You’re standing in the health and beauty aisle, a potential product in your hand—and dozens more almost, but not quite, exactly like it on the shelves before you. How do you answer the question: “Is this the right one for me?” What if the product told you? What if you knew you were really getting all the options in the showroom of your local store?

Trevor Sumner, CEO of Perch, a leader in interactive retail displays, believes that the in-store shopping experience of the future could look something like this. We discuss the balancing act between in-store and e-commerce shopping; how retailers can personalize the shopping experience; and the future of brick-and-mortar retail.

How have the changes of the past few years affected your business?

COVID has been a technology accelerant in so many different ways. We had focused a lot of our technology on grocery and mass retail, so those investments really paid off because they were effectively essential businesses. And those customers were investing even more in the in-store experience—COVID meant that people needed more insight into what was happening in-store in real time.

For example, they needed ways to connect to their shoppers without a sales associate, because there are a lot fewer sales associates now. And we’re seeing a lot of our customers thinking about how to use our technology to unify commerce and convert in-store shoppers into omnichannel shoppers.

There’s been a shift in balance toward e-commerce during the pandemic. Where do you see that trajectory going now?

I think there’s this narrative that brick and mortar is dead, which is absolute lunacy. Brick and mortar has been increasing 1.5% to 2% year over year. Last year we had a deadly pandemic, which meant that people were literally risking their lives to go into stores. And yet it was a flat year for physical retail. Last year was a boon to e-commerce, but I think this year it will be harder to maintain and extend those gains. Amazon lost e-commerce share, even though they grew.

“I see the future of in-store experience about bringing the same #digital tools into the store, to combine the best of physical and digital #shopping.” —@TrevorSumner, CEO of @Perchexperience via @insightdottech

They didn’t grow as fast as the brick-and-mortar stores that had an e-commerce presence. And part of that is because the physical stores themselves delivered about 40% of e-commerce orders for the first time. There are certain industries where the store has to be a key part of customer acquisition, ordering, and fulfillment. And so it becomes less and less helpful to think about e-commerce as separate from brick and mortar.

What do you see as the role of the physical store going forward?

We crave shopping because it connects us with products; that connection between people and products is fundamental. I think increasingly we’re seeing people think about the store—the back of the house—for fulfillment. But the front of house isn’t going away; it’s just going to be optimized in all these new and exciting ways, in part powered by IoT.

Having a local Walmart everywhere so that they can more optimally ship to you is really incredibly interesting on a business level—for margins and for costs and optimizing profits. But it’s definitely not about optimizing the shopping experience.

What can stores do to really enhance connectivity with their products?

The way you connect with products or learn about products online is you click on them. And when you click on them, you go to what’s called a product detail page—or PDP in the industry. And you get videos, ratings, and reviews—and all this stuff that you’re looking at for research. We keep telling ourselves that the reason that we go into stores is because it’s better for product discovery, but, ironically, it’s also the only place where you don’t get that PDP, or product-level, detailed information.

So I see the future of in-store experience about bringing the same digital tools into the store, to combine the best of physical and digital shopping. So I can touch the product—I can get the joy of holding it in my hands. I can look at multiple different products at once in a physical and real way.

If you look at the brands that you really feel an emotional connection to, fundamentally that’s because of the stories being told. According to a recent study, you are 50% more likely to have an emotional connection with a brand in-store than if you engage with it only through e-commerce.

Fundamentally, what we’re doing at Perch is using computer vision to detect which products you pick up at the shelf. The moment you pick one up, the display wakes up and starts telling you about the product. It could be videos, ratings and reviews, other complementary products—maybe comparing products in a product family—providing all the tools you need to decide whether you really want that product and whether it’s right for you. And so, to me, that product pickup is the same as clicking online.
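
(To make the mechanics concrete: in principle, a pickup event can be approximated with simple frame differencing over shelf regions. The sketch below is illustrative only; it is not Perch’s actual computer-vision pipeline, and the camera index, region coordinates, threshold, and trigger_content() callback are all hypothetical.)

    # Illustrative sketch of shelf pickup detection via frame differencing.
    # Not Perch's actual computer-vision pipeline; all values are hypothetical.
    import cv2

    SHELF_ROIS = {"product_a": (50, 120, 200, 260)}  # hypothetical x1, y1, x2, y2
    CHANGE_THRESHOLD = 0.15  # fraction of ROI pixels that must change

    def trigger_content(product_id):
        # Placeholder: a real display would wake with the product's media here.
        print(f"Pickup detected: show videos, ratings, comparisons for {product_id}")

    cap = cv2.VideoCapture(0)  # camera watching the shelf
    ok, baseline = cap.read()  # reference frame of the fully stocked shelf
    if not ok:
        raise RuntimeError("camera unavailable")
    baseline = cv2.cvtColor(baseline, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, baseline)
        for product_id, (x1, y1, x2, y2) in SHELF_ROIS.items():
            changed = (diff[y1:y2, x1:x2] > 30).mean()  # per-pixel change ratio
            if changed > CHANGE_THRESHOLD:
                trigger_content(product_id)  # debouncing omitted for brevity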

Now those clicks at the shelf—picking up a product and looking at it—are an expression of interest. And now we can provide the right message at the right time, and help brands connect on a meaningful basis with the shoppers who are considering their products.

How does this concept apply to something like a refrigerator?

I think of what we do as product-level marketing. Say you just bought a house, and you go to Home Depot to buy a fridge, and there are over 300 different fridges. They can’t fit 300 fridges on the showroom floor, so how do you pick a fridge? But if you go online, you can actually visualize the different configurations of the fridge. Are you doing a double door? What type of shelving configuration can you have? Is this one efficient? Does it have different finishes that can match your kitchen? There are so many different questions that you can answer online that would be very hard to do with physical retail on its own.

But think about that for every product set. We’re working with Johnson & Johnson to bring out their Skin360 tool. It’s a front-facing camera that looks at your face and says, “Okay, based on your skin type, based upon your wrinkles or your sun spots or dry spots. . .” It does an analysis of many different points of your face and asks you a couple questions about what you care about most, and says, “Here are the products that we would recommend.”

Whether it’s finding the right refrigerator, the right electronics, the right computers, the right TV, the right deodorant—all of these things require some digital content. And we’re trying to bring that to the shelf, where 85% of transactions actually occur.

And so one of the things that’s really remarkable about what we’re seeing right now is that only about 1% of digital media spend is happening in-store—where those 85% of transactions occur. So there’s a multibillion-dollar shift toward driving digital in-store, and it’s going to be done in a couple of different, interesting ways.

I think there are going to be digital signage networks on the walls that are basically banner ads. There’s going to be digital at the shelf that’s contextual, reacting to what products you touch. Or there will be front-facing cameras that do demographic segmentation, so I, as a 45-year-old male, will get a different message than a Gen Z woman.

It’ll be exciting, it’ll be personalized, it will be contextual. The other thing really driving this expansion is starting to understand how shoppers shop in-store. Now that we’re shining a light on that with sensors and IoT data, it turns out some of the things we’ve always thought to be true aren’t really true. And it’s going to lead to a revolution in the way we think about the in-store experience.

What’s an example of an assumption that people have been using for years that’s being debunked now?

If you ask anybody in retail what is the most valuable area to place a product on an endcap—that is, the short end of an aisle—they will say, “Eye level.” In fact, they might say, “Eye level is buy level.”

And so I asked them, “Is it true?” All of a sudden they’re like, “I don’t know. That’s what I’ve always been told.” And the answer is: while it is true that being at eye level is beneficial—it shows about a 25% engagement and sales lift versus the middle or lower levels—it turns out that the edges of the endcap are more valuable, showing about a 35% to 50% sales and engagement lift. And nobody knew that, in part because earlier studies just looked at the main aisle itself.
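
(For the quantitatively minded: “lift” here is simply the relative increase in a zone’s engagement rate over a baseline zone. With hypothetical zone-level counts chosen to mirror the figures above, the arithmetic looks like this.)

    # Hypothetical zone-level counts; lift = (zone rate / baseline rate) - 1.
    impressions = {"eye_level": 10_000, "mid_level": 10_000, "edge": 10_000}
    engagements = {"eye_level": 1_250, "mid_level": 1_000, "edge": 1_400}

    baseline = engagements["mid_level"] / impressions["mid_level"]
    for zone in impressions:
        rate = engagements[zone] / impressions[zone]
        print(f"{zone}: {rate / baseline - 1:+.0%} lift vs. mid level")
    # Prints +25% for eye level and +40% for the edge, consistent with the
    # 25% and 35%-to-50% ranges cited above.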

And with front-facing cameras we can start testing these things, and actually provide a report that says, “Here’s the content that influences women and men by age, demographic, etc.” And it’s all anonymous; it doesn’t record your identity. So it can do this in a way that doesn’t sacrifice privacy, but helps brands finally get data out of the black box where most of their sales occur.

So stores continue to be where it’s at; but we’ve got to merge in some of these new behaviors, new desires, new demands for information. There’s a lot of social shopping that happens in-store, where people text their friends or take pictures of products. How do we make social shopping fundamental to the physical shopping experience? There are fascinating ways you can do this using digital and screens and mobile, and integrating them all together.

What do you see as the path forward in bringing all of this rich data together?

I think it’s the reliance on the mobile phone. Once you send people onto their mobile phones, you are sending them out of the shopping experience that you own, and onto the World Wide Web—this is why all the major retailers are launching loyalty programs. I think the question is really more about how the data gets put together in a way that can make these experiences more cohesive.

How do we determine context so that we give you relevant things? Shoppers are saying: “Tell me about the thing that I’m interested in.” We think the most important signals—the products you are interested in right now—are the ones you’re touching. To me, there’s a balance between not taking people out of the physical shopping experience and just throwing them onto the mobile phone. It has to be blended together.

How does the messaging work?

All the real content that you need to help sell a product is already there. Ratings and reviews—already there. Product-comparison sheets, already there. If you just provide the basic levels of information that we can find online, the in-store experience gets enhanced fivefold. The question is, how do we bring it in-store?

A lot of people just try and put their website in-store, and it’s just super frustrating to a shopper because, if I wanted to go to your website, I would have gone to your website. The shopping behaviors and interaction modes are much different. You’re not going to click six, seven levels deep into a website in-store; the most important information has to be bubbled up immediately.

In the short term, this is going to be driven a lot by brands; but long term, it’s going to be driven by retailers. If you look at it, many of the major retailers are investing very deeply in these types of digital networks.

So, right now, I’m an arms dealer to individual brands and some retailers—I’m helping all brands to deliver their digital messages and connect with their customers. Long term, I think retailers are going to be the arms dealers, and provide this platform for interaction, meaningful engagement, and data to each of the brands.

We’re collecting all this data that is going to be extraordinarily valuable. I think that’s why, in part, I’m so excited about stores. Stores now are the dominant form and channel—what are they going to be when we make them more profitable, more efficient, more engaging, more educational, more integrated into personalization? All those things—how much better are stores going to be? When you paint that picture, I couldn’t be more bullish on the bright future of brick-and-mortar retail.

Related Content

To learn more about the future of retail, listen to our podcast In-Store Shopping Matters More Than You Think.

The ABCs of EdTech

It’s common knowledge that technology has touched every aspect of the modern world, but education seems like an exception to that rule. Not for much longer. The pandemic has proved that lined paper and No. 2 pencils are no longer enough to do the job—even with the most gifted teacher at the front of the room.

But how can cash-strapped schools and school districts bring the education sector into the tech age without breaking the bank—or the patience of overloaded teachers?

Manuel Edghill, Head of Software, Growth and Partnerships at ViewSonic, a global provider of computing, consumer electronics, and communications solutions, and Chris O’Malley, Director of Marketing for the Intel® Internet of Things Group, have some bold ideas about how EdTech can support students, teachers, and district budgets. They’ll tell us how video analytics, university prototypes, and long-term vision can build a better, smarter, more effective classroom.

To hear the full conversation, listen to our podcast EdTech as a Social Good with ViewSonic and Intel®.

What key challenges does the education sector face today? And how can EdTech help?

Manuel Edghill: A lot of educators are having a challenge competing with technology, and with the attention spans shaped outside the classroom. You have the TikToks, and you have all these really quick bursts of information—how do you translate that to education, where you need a longer attention span? How do you engage your students inside the classroom, and how do you do that when your students are remote?

Chris O’Malley: When I’m presenting to educators, I often say that we live in a world of screens, we live in a world of video. It’s dynamic. It’s interactive. And that’s what children are used to. That’s what they thrive on. That’s what engages them. And then if you walk into a classroom that has paper displays or paper materials without any video, without any interactivity, without any of the dynamic digital content that these kids are used to, they kind of shut down.

“For #technology to be adopted in the classroom, it has to be a #teacher first and technology second mentality.”—Manuel Edghill, Head of Software, Growth and Partnerships, @ViewSonic via @insightdottech

But if we bring it down to the issue we’re facing right now, teachers have been thrust very quickly into balancing in-person learning with virtual learning—and with hybrid learning, where sometimes they’re trying to teach in front of 10 people in class and 10 people who are quarantined at home because of a COVID issue.

I think technology can address all those different issues that teachers are facing right now, in an efficient manner that’s good for the students and good for the teachers.

Manuel Edghill: For example, there are some assistive technologies we are working on that would help teachers identify the students who are not engaged—because they’re a little shy, or maybe they have some learning difficulties. A lot of these assistive features are available by default. And since everybody has the same technology in that particular classroom, students will be able to get help without having to draw attention to themselves.

Can you tell us more about how ViewSonic’s technology allows teachers to get real-time feedback on what kind of impact the lesson is having?

Chris O’Malley: We call it video analytics. The technology that ViewSonic has can identify for the teacher that, say, in the second 15 minutes of a class the students’ attention dropped dramatically. It’s either the natural attention span of children, or maybe the course content needs to be improved for the second half of the class. Maybe there’s a need to be interactive at that point.

Manuel Edghill: I want to clarify that this type of technology is teacher first, teacher focused, which means we’re doing our very best to help the teacher better assess their classes and their students. And we make it absolutely anonymous, so as not to pinpoint a particular student or teacher. It’s more to get an overview of the class itself.

Chris O’Malley: One of the things that Intel does to help ViewSonic do that is we build a lot of analytics models that allow you to determine if a student is happy or sad. But it’s done entirely at the edge, and any identifying information is entirely deleted. The only thing that would ever go to the cloud is “happy student, sad student.” There’s no identity attributed to it; it’s designed to be 100% private. You have no idea who’s happy or sad, but you can get an idea of whether the students are engaged.
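
(As a rough sketch of the pattern O’Malley describes, with inference at the edge, frames discarded immediately, and only anonymous aggregates leaving the device, the logic might look like the following. The classify_engagement() model and the camera object are hypothetical stand-ins, not Intel’s or ViewSonic’s actual software.)

    # Sketch of privacy-preserving engagement aggregation at the edge.
    # classify_engagement() and camera are hypothetical; no frames or
    # identities ever leave the device, only interval-level counts.
    import time
    from collections import Counter

    INTERVAL_SECONDS = 15 * 60  # bucket results into 15-minute class segments

    def classify_engagement(frame):
        """Hypothetical on-device model: returns 'engaged' or 'disengaged'."""
        return "engaged"  # placeholder for a real edge-inference result

    def report(interval_index, counts):
        # Only anonymous aggregates are transmitted, e.g. {"engaged": 112, ...}.
        print(f"interval {interval_index}: {dict(counts)}")

    def run(camera):
        start, interval, counts = time.time(), 0, Counter()
        while True:
            frame = camera.read()
            counts[classify_engagement(frame)] += 1
            del frame  # the frame is discarded here and never transmitted
            if time.time() - start >= (interval + 1) * INTERVAL_SECONDS:
                report(interval, counts)  # a teacher could see attention dip here
                interval, counts = interval + 1, Counter()

The design point is structural: identity never exists outside the device, so privacy doesn’t depend on policy alone.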

How do you see technology supporting teachers better—while allowing them to focus on educating, not on the educational tools?

Manuel Edghill: There’s a huge percentage of time that goes into prep and admin. If technology could assist in these areas, that would be a huge benefit. Teachers like to share a lot of content, so technology makes this very, very easy. We have solutions where teachers can save all their lessons; they can embed videos; they can write quizzes, and then they can share them with their fellow teachers.

Schools also benefit a lot, especially in budgeting and resources, because there’s this huge inequality in economics, education, and access to resources. One cool thing we have seen is that schools that have tech partner up and collaborate with each other.

Chris O’Malley: There are some applications where a student can, for example, do a math problem online, and input line by line how they would work through it. Now, they may get the problem wrong, but rather than just saying, “You got the problem wrong,” the application might highlight the work for the teacher: “This student understands this, but they didn’t quite get the associative property. Maybe you need to give them a little bit more reference on the associative property.”

And then the ease of course preparation, the digitization of materials. You’re seeing that with all the educational publishers—they’re really, really improving their online digital content. That’s going to help the teachers.

A big concern for education is cost. How can this technology help a school’s or district’s budget?

Manuel Edghill: Once we provide a solution and install it, teachers no longer need a lot of the overhead applications they were using. A lot of savings happens there.

For example, we have a virtual classroom called myViewBoard Classroom. We did our very best to replicate a true physical classroom in a virtual world. The teacher can manage their groups and discussions and students, and they know who’s doing what. And we also have a video-assisted learning platform called myViewBoard Clips—it’s like YouTube, but a lot better. You have quizzes, you can share lessons, you have videos that are filtered just for education.

When we provided this one solution to a school, the school could get rid of two different applications it had previously—one for a virtual classroom and one for a video database. It saved the cost of those two additional fees.

The second thing is just the time saved. We also have a lot of device management and app management software that saves a lot of time for that IT guy who has to run around, or for that teacher who needs to make sure that everybody is on the same page in a particular topic or app.

Chris O’Malley: The thing I would add is that the cost that a school district faces for having the print editions of books and everything else is quite expensive, and they have to be replaced on a regular basis.

Manuel Edghill: Governments, especially in the US and in Europe, have these huge funds that are directly focused on the EdTech segment. What we’ve done also is help some of our clients and channel partners assess their rollouts so that they align with government funding, and it’s been quite successful.

How is ViewSonic enabling the adoption of EdTech?

Manuel Edghill: We have a whole team that does professional development. They’ll walk you through everything to make sure that the teachers, the IT staff, even the students are well equipped to use our technology.

We have also worked with some universities, building them high-end EdTech classrooms. The whole purpose is to equip a classroom or a particular learning lab with the technology and put it to use. We then partner with teachers who do both an in-class and a hybrid lesson at the same time. They use the technology, and then they invite other teachers.

And we fund most of that. We collaborate with Intel to sponsor some of these things. And this is not only to train the teachers and show them that the tech is not as scary as they think; for the team that I am on, we also use these classrooms to listen and observe, and to see what needs to be improved.

Chris O’Malley: This is an area where I think ViewSonic does a really good job. They produce very sophisticated hardware, and the software to go with it. But it’s not just a bunch of software engineers sitting in a lab creating stuff that then gets handed out to teachers, so that the teachers are like, “Yeah, how do I use this?” They work hand in hand with teachers and with the user interface people to figure out the use cases teachers need and the things that are important.

How can educators and schools get started on this EdTech journey?

Manuel Edghill: If a school doesn’t have a long-term vision for the rollout of EdTech, and how it’s going to be used and who is going to benefit, it’s going to be very tough to be successful with it. You need senior support and at least a two- to three-, even a five-year vision of what it will be.

And I say this, because a lot of the time schools will buy the ViewBoard because it’s new. “We have the budget. We have the funds. We’ve got to use it somehow.” And then it hangs on the wall and nobody uses it.

Chris O’Malley: The vision is super important. What does the school need? What do the students need? What do the teachers need? What are the use cases that we need technology to help us with? And then even go further and ask: What are the business processes we’re going to put in place to make sure that this technology is utilized properly?

And then go step by step: Is it connectivity that we need first? Is it in-classroom technology that we need second? Is it student technology that we need third? Outline every one of those, and then go figure out what’s needed, and then use technology to solve that problem. But if you just throw in cool technology, most of the time you end up creating more problems.

Is there anything that you’d like to add?

Manuel Edghill: I’d like to give a reminder that, for technology to be adopted in the classroom, it has to be a teacher first and technology second mentality. Technology should be an augmentation and a support, a complement—something that assists in the delivery of an exciting lesson.

Chris O’Malley: If you’re a great teacher, you’re going to be a great teacher. What you can do is take this technology and allow yourself to be a better teacher, or allow yourself to reach more students, or reach students in a different way, or to engage them further.

And I think our children need it. We live in a world of technology. If they don’t understand how to use technology, and experience technology in school, when they come out into the workforce they’re going to be behind. But we certainly have to always remember that it’s a tool and an aid to a really good teacher and to the whole process of education.