
Edge Computing: Tech’s Next Trillion-Dollar Opportunity

George Mathew, Lonne Jaffe | January 26, 2022

Even as technology continues to move into the cloud, a simultaneous shift is already underway in the opposite direction as computing increasingly moves to the edge. One key reason: by 2025, the number of connected devices in use is expected to exceed 56 billion units, according to IDC. Edge computing is not a successor to the cloud, but another expansion of technology’s reach that represents an exceptional opportunity for investment and growth.

As edge-device and computing adoption continues to skyrocket, organizations need to process the proliferating data from these devices locally so they can act on it in real time. This will require investments in edge-adapted infrastructure and related tools and platforms. Enterprise technology is about to be transformed yet again — the need to process and analyze some data at the point where it is both created and consumed is becoming a business requirement as mandatory as moving other parts of the technology stack into central locations in the cloud.

Organizations must already deliver compelling customer experiences on edge devices based on data that is continuously refreshed. The alternative—uploading and downloading data to and from a cloud service or local data center—often takes too long. Whether the data lives on a consumer’s smartphone or a manufacturing floor, organizations need to process and analyze it as near as possible to the point where a real-time, data-driven response is required.

The competition to provide these capabilities is already intense. Many vendors—ranging from cloud service providers to telecommunications carriers to longtime providers of server and storage infrastructure—view edge computing as their next trillion-dollar opportunity.

Why We Compute at the Edge

Edge computing has been around for decades in various forms. There are countless examples of standalone software being deployed on, for example, devices used on factory floors that automate various aspects of a manufacturing process. In recent years, many of those systems have been connected to the Internet to enable organizations to collect and share data more easily. The challenge now is that the volume of data being generated at the edge often exceeds what is practical to transfer raw across a wide area network (WAN). Platforms that can process and analyze large amounts of data at the network edge are required. The aggregated results of data processed on an edge platform can then be shared more efficiently with other applications running in the cloud, in an on-premises IT environment, or even on the network itself.
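
To make this concrete, here is a minimal Python sketch of the kind of local aggregation an edge platform might perform before anything crosses the WAN; the sensor readings and field names are hypothetical:

```python
import statistics
import time

def aggregate_readings(readings):
    """Summarize raw sensor readings into a compact record
    suitable for forwarding over a WAN."""
    values = [r["value"] for r in readings]
    return {
        "window_end": time.time(),
        "count": len(values),
        "mean": statistics.fmean(values),
        "min": min(values),
        "max": max(values),
    }

# Raw data stays on the edge device; only the small summary crosses the WAN.
raw = [{"value": v} for v in (20.1, 20.4, 19.8, 21.0)]
print(aggregate_readings(raw))
```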

Low-latency requirements are today’s biggest challenge. In many scenarios, an application running on a gateway at the edge of a telecommunications network needs to be able to respond in a fraction of a second to a mobile computing application’s request for data. In other circumstances, an edge application is driving an automated factory operation that needs to dynamically adjust to analytics processed at the edge of an extended enterprise.
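
To illustrate that latency budget, here is a hedged Python sketch of a gateway handler that falls back to a local cache whenever a round trip to a central service would miss the deadline; the cache contents and timings are invented:

```python
import asyncio

LOCAL_CACHE = {"telemetry": {"temp_c": 20.3}}  # refreshed by a local pipeline

async def fetch_from_cloud(key):
    await asyncio.sleep(0.5)  # simulated slow round trip to a central service
    return {"temp_c": 20.4}

async def handle_request(key, budget_s=0.05):
    """Answer within a strict latency budget: try the central service,
    fall back to the local edge cache if the deadline would be missed."""
    try:
        return await asyncio.wait_for(fetch_from_cloud(key), timeout=budget_s)
    except asyncio.TimeoutError:
        return LOCAL_CACHE[key]  # stale-but-fast local answer

print(asyncio.run(handle_request("telemetry")))
```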

There are even more advanced use cases, some of which are described below. These include in-car computers that process computer-vision data in real time for driverless vehicles and robotics applications that promise to transform manufacturing.

Hardware Advances Enable Edge Computing

The often competing demands of low-latency network connections, high-power computing, and large storage requirements are driving innovations in edge hardware. Electric carmaker Tesla, for example, has equipped its vehicles with custom-built processors capable of pre-processing massive amounts of data, performing machine learning inference to make quick driving decisions and predictions, and even doing some deep learning model training on the vehicle itself, before sending a subset of the data to central systems for further training.

Earlier this year, Tesla unveiled a processor with six billion transistors. According to the company, the device boosts performance by 21 times compared to the Nvidia GPU it replaced in the Model S, Model 3, and Model X. These powerful devices are used in addition to CPUs from traditional providers such as AMD and Intel for workhorse applications like infotainment.
With this computing power at the edge, a Tesla car can process sensor data that allows it to recognize and self-pilot around pedestrians, other cars on the road, emergency roadside signage, and potentially hazardous moving objects. This sensor data is also processed alongside pre-loaded mapping data and GPS connectivity. Additionally, Tesla’s in-car computers relay information over Tesla’s network to massive storage facilities, where the data is analyzed to improve Tesla vehicles’ self-driving and other functions. Improvements to the vehicle’s software stack are then downloaded to it over the network.
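
Tesla’s actual pipeline is proprietary, but the underlying pattern (act locally in real time, upload only a useful subset for central training) can be sketched in a few lines of Python. Everything below, including the model stub and the confidence threshold, is a hypothetical stand-in:

```python
import random

def run_inference(frame):
    """Stand-in for an on-vehicle model; returns (label, confidence)."""
    return "pedestrian", random.uniform(0.5, 1.0)

def act_on(label):
    pass  # placeholder for the real-time control system

def process_frame(frame, upload_queue, confidence_floor=0.8):
    label, confidence = run_inference(frame)
    act_on(label)  # the driving decision happens locally, immediately
    # Only ambiguous frames are queued for central training;
    # confident ones never need to leave the vehicle.
    if confidence < confidence_floor:
        upload_queue.append(frame)
    return label

queue = []
for frame_id in range(100):
    process_frame(frame_id, queue)
print(f"{len(queue)} of 100 frames flagged for upload")
```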

Healthcare is an example of a sector with similar needs for advanced machine learning-enabled devices and software. Processing images from MRI and other scanning devices with inference capabilities deployed at numerous locations would require computing power and network infrastructure similar to what Tesla’s cars need. Most hospitals, however, don’t have the interest or wherewithal to build their own powerful edge computing hardware and will need to consume this capability from technology vendors.

As semiconductor process nodes shrink to seven nanometers and below, the ability to run applications at the edge in an energy-efficient manner becomes realistic for a wider range of organizations. With each new generation of processors, the total cost of edge platforms will continue to decline while the horsepower available to run applications steadily increases.

Networking at the Edge

Regardless of how much data is processed and analyzed at the edge, the sheer volume of data creates more demand for network bandwidth and, consequently, the potential for bottlenecks. One main bottleneck is the backhaul—the stretch between edge towers and the central servers and network. Whether tomorrow’s edge network infrastructure will consist of 5G, more ultra-powerful WiFi connections and devices, low-latency daisy-chain systems, fiber systems such as those Google is developing, satellite connectivity, yet-to-be-created technology, or a combination of these is still unclear. But one thing is certain: data volumes are increasing faster than network bandwidth.

Today, organizations can connect edge platforms over several available wired and wireless network alternatives. The number of use cases involving edge computing is expanding in part because wireless 5G networks make it possible to share data across multiple interconnected edge platforms. For example, manufacturing employees wearing augmented reality headsets can share data and analytics in near-real time, not only with their coworkers on the floor but with colleagues hundreds of miles away.

As telecom carriers continue to virtualize the network infrastructure they employ to deliver 5G services, the cost of delivering those services should steadily decline across a widening range of geographic areas. Today, a 4G network can support roughly 4,000 devices per square mile. A 5G network could pack up to three million devices into the same area, a roughly 750-fold increase.

It’s still early days as far as 5G is concerned. A Capgemini survey of 1,000 industrial organizations that plan to incorporate 5G into their operations finds that fewer than a third (30%) have progressed to trials and real-world implementations. Yet carriers are already at work defining the next generation of 6G wireless standards, which promise to deliver another giant leap for wireless networks in another decade.

Regardless of the network employed, organizations will still need to strike a balance between how much data needs to be processed, stored, and eventually disposed of at the edge versus the amount of aggregated data that needs to be transferred over a WAN and incorporated within various other platforms that process data centrally or function as systems of record.
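
One way to think about that balance is as a routing policy applied to every record an edge platform captures. The Python sketch below is a toy illustration; the tiers and thresholds are invented:

```python
from dataclasses import dataclass

@dataclass
class Record:
    age_s: float    # seconds since capture
    critical: bool  # e.g., flagged as an anomaly by local analytics

def route(record: Record) -> str:
    """Toy tiering policy for edge data."""
    if record.critical:
        return "forward raw record over the WAN"   # rare, high-value events
    if record.age_s < 3600:
        return "keep raw record at the edge"       # available for local queries
    return "fold into an aggregate, then dispose of the raw record"

for rec in (Record(10, True), Record(10, False), Record(7200, False)):
    print(route(rec))
```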

Factoring in Security

Arguably, the biggest obstacle to edge computing is security. Each time an organization deploys an edge platform, the attack surface it needs to defend grows. Many of these edge platforms are also managed by operational technology (OT) teams that don’t have as much cybersecurity expertise as, say, the data center security team at Google. Most organizations are trying to meld their OT and traditional information technology (IT) teams in an effort to deploy, manage, and secure edge platforms. However, the cultures of the two teams differ significantly, and it takes time for organizations to harmonize their efforts.

More challenging still, cybersecurity expertise is hard to find and retain. Cybercriminals have discovered how vulnerable OT environments are. Attackers affiliated with various nation-states have already demonstrated that industrial control systems controlling electricity grids and other critical infrastructure are primary targets should war erupt. From an attacker’s perspective, each edge computing platform connected to a network is yet another vehicle through which malware can be introduced into an IT environment. It only takes a few minutes for malware to propagate laterally across an entire platform.

Ultimately, organizations will spend billions of dollars securing edge platforms. Once edge systems are locked down, edge processing can improve overall cyberdefense—processing more data at the edge, without moving it around as much, can dramatically reduce the attack surface.

How the Edge Computing Ecosystem is Evolving

Once the right level of infrastructure is in place, the next challenge is building applications. The widespread adoption of container technologies has allowed developers to build applications using artifacts that require much less memory and storage. Most container environments today, like those based on Docker and Kubernetes, are decentralized in a sense, but they run in centralized environments like data centers. Work is progressing, however, on even smaller container formats that make it easier to deploy application software on, for example, an oil rig hundreds of miles from any server or gateway.

Lighter-weight instances of the open-source Kubernetes container orchestration engine are starting to gain traction. Kubernetes, by surfacing a consistent set of application programming interfaces (APIs) regardless of what platform it is deployed on, provides an opportunity to centralize the deployment and management of modern microservices-based applications across highly distributed computing environments.
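
As a rough illustration of that idea, the sketch below uses the official `kubernetes` Python client to push one deployment manifest to several edge clusters through the same API. The cluster context names and container image are hypothetical:

```python
# pip install kubernetes
from kubernetes import client, config

DEPLOYMENT = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "telemetry-agent"},
    "spec": {
        "replicas": 1,
        "selector": {"matchLabels": {"app": "telemetry-agent"}},
        "template": {
            "metadata": {"labels": {"app": "telemetry-agent"}},
            "spec": {"containers": [
                {"name": "agent", "image": "registry.example.com/telemetry-agent:1.0"}
            ]},
        },
    },
}

# The same manifest is applied to every edge cluster registered in kubeconfig,
# whether it runs full Kubernetes or a lightweight distribution such as K3s.
for ctx in ("factory-east", "factory-west", "oil-rig-07"):
    api = client.AppsV1Api(config.new_client_from_config(context=ctx))
    api.create_namespaced_deployment(namespace="default", body=DEPLOYMENT)
    print(f"deployed telemetry-agent to {ctx}")
```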

Industry API specifications and frameworks are also beginning to emerge for machine learning, data science, and other technologies that empower edge computing. An example is oneAPI, supported by Intel and others, which provides a framework for developing edge applications.

Providers of development platforms are also racing to create frameworks that make it simpler to build applications within edge architectures that seamlessly invoke backend application services running in the cloud. Distributed event-streaming platforms like Apache Kafka can play a role in allowing edge computing platforms to share data with those backend platforms at scale. Some organizations will also invoke the services of content delivery networks (CDNs) to improve the performance of applications deployed across a global network of points-of-presence (PoPs) that have already been constructed.
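
For instance, an edge platform might publish compact summaries to a Kafka topic that backend services consume at their own pace. This minimal sketch uses the `kafka-python` package; the broker address, topic name, and payload are hypothetical:

```python
# pip install kafka-python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="backhaul-broker.example.com:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# The edge platform publishes compact summaries rather than raw streams;
# backend services in the cloud consume the topic asynchronously.
summary = {"site": "plant-12", "window": "2022-01-26T10:00Z", "mean_temp_c": 20.3}
producer.send("edge-summaries", value=summary)
producer.flush()  # block until the record is acknowledged
```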

Finally, stateful applications deployed at the edge can require local databases that provide access to persistent forms of data. The bulk of container applications deployed today are stateless—they store data on an external storage system. Building stateful applications that access data in containers on a local Kubernetes cluster is more challenging.
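
A common starting point is a small embedded database on a persistent volume. The Python sketch below uses SQLite and assumes a volume is mounted at /data (for example, via a Kubernetes PersistentVolumeClaim); the path and schema are illustrative only:

```python
import sqlite3

# /data is assumed to be a persistent volume mount, so the database
# survives container restarts and rescheduling on the local cluster.
conn = sqlite3.connect("/data/readings.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS readings (ts REAL, sensor TEXT, value REAL)"
)
conn.execute("INSERT INTO readings VALUES (?, ?, ?)", (1643188800.0, "t1", 20.3))
conn.commit()

for row in conn.execute("SELECT ts, sensor, value FROM readings"):
    print(row)
conn.close()
```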

There are already millions of developers familiar with container platforms who are standing by until improved hardware and software simplify and secure the deployment of highly portable applications as far out on the edge as possible.

The Coming AI and Data Management Challenge

The most exciting edge innovations ahead will likely be driven by the rise of machine learning and AI. Today, AI models are typically trained within centrally located cloud computing platforms. An AI and machine learning inference engine can then be created and deployed in a production environment, where the trained engine infers what actions to take as new data arrives.

Over time, those inference engines are subject to drift as the new data being collected diverges from the data on which the original AI model was trained. Data science teams, working in collaboration with IT and OT teams, then need to train a new AI model to replace the inference engines at the edge. Advances in AI and machine learning capabilities, in conjunction with lower-power, higher-performance processors and other cost-lowering components, can improve the accuracy and performance of prediction systems, but they typically don’t reduce the degree to which this drift happens.
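
Detecting that drift can start simply. The Python sketch below flags drift when the mean of a single feature moves several standard deviations away from its training baseline; the readings and threshold are invented for illustration:

```python
import statistics

def drift_score(baseline, recent):
    """Crude drift signal: how far the recent feature mean has moved,
    measured in baseline standard deviations."""
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.fmean(recent) - mu) / sigma

baseline = [20.0, 20.5, 19.8, 20.2, 20.1, 19.9]  # data the model was trained on
recent = [23.1, 22.8, 23.4, 23.0, 22.9, 23.2]    # what the edge now observes

if drift_score(baseline, recent) > 3.0:  # the threshold is a tuning choice
    print("drift detected: schedule retraining and redeploy the edge model")
```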

If AI models can be at least partially retrained at the edge, this can help to reduce prediction accuracy drift. However, the software and hardware infrastructure required to retrain an AI model efficiently within an edge platform is still very early-stage. That said, as the cost of increasingly high-powered hardware declines and edge machine learning software systems improve, the ability to train at the edge will become more feasible.
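
As a rough sketch of what partial retraining at the edge could look like, the example below uses scikit-learn’s incremental `partial_fit` API as a stand-in for a production edge-training stack; the data is synthetic:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Incrementally trainable logistic regression as a stand-in edge model.
clf = SGDClassifier(loss="log_loss")

# Initial training (in practice the model would arrive pre-trained from the cloud).
rng = np.random.default_rng(0)
X0, y0 = rng.normal(size=(200, 4)), rng.integers(0, 2, 200)
clf.partial_fit(X0, y0, classes=np.array([0, 1]))

# Later, on the edge device: fold in a small batch of freshly labeled local
# data without shipping it to the cloud or retraining from scratch.
X_new, y_new = rng.normal(size=(20, 4)), rng.integers(0, 2, 20)
clf.partial_fit(X_new, y_new)
print("accuracy on the new batch:", clf.score(X_new, y_new))
```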

The Economic Impact and Investment Opportunity

The global edge computing market was valued at approximately $3.6 billion in 2020 by MarketsandMarkets. The research firm forecasts this market will reach $15.7 billion by 2025, representing an astounding 34.1% compound annual growth rate (CAGR).

McKinsey forecasts that the economic value generated by internet of things (IoT) applications alone will be somewhere between $3.9 trillion and $11.1 trillion per year by 2025. Add in all the other use cases for edge platforms, and the potential economic impact becomes enormous.

Increased computing at the edge will be a huge technology market in its own right and will also drive increased demand for centralized cloud computing. Gartner notes that by year-end 2023, 20% of installed edge computing platforms will be delivered and managed by hyper-scale cloud providers. The other 80% of this massive emerging market opportunity is up for grabs.
