A Collaboration to Bring the Future, Faster
For many years now we’ve worked with customers tackling some of the most complex challenges in some of the largest IT environments—from modernizing their applications for cloud native speed, elasticity, and scale to architecting and building their infrastructure for multicloud flexibility. As we look ahead to the next decade, we’ve been asking ourselves: what’s next? What is the new frontier for our customers?
Last year, we noted in our 2019 State of Multicloud Report that our collective vision of what is possible is rapidly evolving: “We can build an application or a service once and run it anywhere—from the data center to the cell tower to the sensor. We can build composite applications that communicate across clouds, leveraging an array of cloud services. And we can believe that someday application workloads will move freely across clouds. All of it is driven by applications and the businesses they drive.” In other words, the never-ending quest to improve the customer experience continues. (Check out the 2020 State of Multicloud Report.)
Today, however, our ability to collect real-world data is transforming that pursuit, creating new opportunities for customized experiences in real life, more informed decision-making, and intelligent automation. This is the edge: from satellite locations to IoT devices, compute has expanded well beyond the data center to transportation, energy supply, agricultural equipment, factory floors, retail, and more. But all that data has to be processed and analyzed for meaningful insights.
“The edge is all about getting processing close to where data is being generated, improving the customer experience. The closer you get to the edge, the more you reduce network latency, but the tighter the constraints on compute and storage resources,” says Shmuel Kliger, Turbonomic President & Founder. “Turbonomic solved this challenge with Application Resource Management: our software finds the desired state between these competing tradeoffs, which continuously fluctuate with changing resource demands. We are excited to be working with IBM to accelerate the adoption of edge computing.”
IBM has announced an autonomous edge solution that enables a single administrator to securely manage the scale, variability, and rate of change of application environments across tens of thousands of endpoints. IBM Edge Application Manager is a full-lifecycle edge environment that enables you to create, deploy, run, secure, monitor, maintain, and scale business logic and analytics applications at the edge. It can run anywhere and manage workloads on virtually any edge endpoint, including servers, gateways, and devices. It brings enterprise-grade security to edge deployments with spoofing prevention, tamper-proofing, and encryption. Edge endpoints run on the Red Hat OpenShift enterprise Kubernetes platform or Docker devices, giving you the choice and flexibility to extend from any public or private cloud to any edge server and device.
In collaborating with IBM, we have a teammate that understands and values the long game: meeting customers where they are today and working with them to accelerate the delivery of what is possible.
“The convergence of 5G and edge computing will spark a new level of innovation,” says Evaristus Mainsah, general manager, IBM Cloud Pak Ecosystem, “and this in turn will enable and fuel a broad ecosystem of providers to co-create for a growing set of edge opportunities. We are excited about the value that our collaboration with Turbonomic can bring to our joint clients needing to automatically manage resource optimization in edge systems to run edge analytics and applications.” Read more about how IBM is making the promise of edge a reality with the help of their partner ecosystem, including Turbonomic.
When it comes to edge computing, a confluence of trends and technologies is raising its profile as a feasible business use case:
- Containers/Kubernetes enable the portability and orchestration of processes across clouds, local data centers, and local devices.
- Service mesh enables application services to communicate across this heterogeneous mix of infrastructure.
- 5G enables faster data speeds between the cloud and edge endpoints.
- And our ability to collect data at the source, created by people, places, and things, is rapidly improving.
The opportunities that edge and IoT offer are undeniable. Imagine elevators perfectly timed to greet you when you get off a train; crop irrigation perfectly controlled based on weather and climate conditions, thereby minimizing water waste; real-time identification of bottlenecks and inefficiencies on a factory floor; customized digital signage as you enter your local bank; perfectly timed traffic lights; and of course, requiring very little imagination today, self-driving cars. It’s no wonder then that Gartner predicts that more than 50% of large enterprises will deploy at least six edge computing use cases for IoT or immersive experiences by year-end 2023, versus less than 1% in 2019. (Source: Gartner, “Exploring the Edge: 12 Frontiers of Edge Computing,” May 2019, ID G00388219.) Sounds inevitable, right?
Controlling Complexity: From Multicloud to the Edge
As much opportunity as edge computing promises, there are real challenges. Our 2020 State of Multicloud Report found that the biggest barrier to edge becoming a conventional use case is the complexity of managing highly distributed services and data.
The modern application will be a mesh of services operating and communicating across three main tiers:
- Endpoints such as sensors and IoT devices that collect data and perhaps execute some real-time processing, e.g., a self-driving car stopping at a stop light.
- Local “edge” data centers that may run analysis and gather insights for consumption onsite, e.g., a branch office or factory floor.
- Cloud with theoretically unlimited capacity where big data processing and AI/ML can be executed.
But tradeoffs exist across these tiers. Data can be processed faster locally, but compute and storage there are limited; processing it elsewhere removes that constraint but introduces network latency in getting the data where it needs to be. As data flows through these tiers, the question becomes: which processes can you move where? How much can (or, in the case of self-driving cars, must) be processed in real time at the endpoint? How much can be processed locally, where latency is minimal but compute and storage capacity are constrained? Alternatively, you can process the data in the cloud with its theoretically infinite capacity, but you take a hit on network latency, as determined by the capacity of the network. Ultimately, it is a tradeoff between limited compute and storage capacity at the edge and the network delay of moving data to be processed elsewhere. These resource tradeoffs have existed since the dawn of virtualization; they are only more complex today.
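The tier tradeoff above can be made concrete with a toy model: total response time is processing time (worse where compute is scarce) plus network latency (worse the farther the data travels). The sketch below is purely illustrative; the tier names, capacities, and latencies are assumptions, not figures from Turbonomic or IBM.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    compute_gflops: float      # available processing capacity at this tier
    network_latency_ms: float  # round-trip latency to reach this tier

# Illustrative numbers only: tight compute at the endpoint, vast compute
# in the cloud, with network latency growing as data travels farther.
TIERS = [
    Tier("endpoint", compute_gflops=5, network_latency_ms=0),
    Tier("edge-dc", compute_gflops=500, network_latency_ms=10),
    Tier("cloud", compute_gflops=50_000, network_latency_ms=80),
]

def response_time_ms(workload_gflop: float, tier: Tier) -> float:
    """Processing time on the tier plus the latency cost of reaching it."""
    processing_ms = workload_gflop / tier.compute_gflops * 1000
    return processing_ms + tier.network_latency_ms

def best_tier(workload_gflop: float) -> str:
    """Pick the tier that minimizes total response time for this workload."""
    return min(TIERS, key=lambda t: response_time_ms(workload_gflop, t)).name

print(best_tier(0.01))   # a small real-time inference job
print(best_tier(1000))   # a heavy batch-analytics job
```

Under these assumed numbers, a tiny real-time job is fastest at the endpoint, while a heavy batch job is worth the network trip to the cloud, which is exactly the tradeoff the paragraph describes.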
Applying Economic Principles Solves Edge Computing’s Resource Tradeoffs
Since 2009, Turbonomic has applied economic principles to manage competing resource tradeoffs. It is how our software continuously assures performance while maximizing efficiency, for any application, on any type of infrastructure. Virtualization was the first proof point to validate the power of this abstraction. A decade and 2,000+ customers later, we have applied the same data model and analytics to solve this challenge in cloud and containers. Today we look to again apply economic principles to edge computing use cases, automatically managing resources across cloud, edge data centers, and edge devices so that your highly distributed applications always perform. Our work with IBM will enable our customers to accelerate their adoption of edge computing and leverage its benefits for their business and their customers. We can’t wait to see what the next decade brings.
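To give a flavor of the economic abstraction described above, here is a minimal sketch (an illustration of the general market idea, not Turbonomic's actual implementation): each tier "sells" capacity at a price that climbs steeply as utilization approaches saturation, and each workload "buys" from the cheapest seller. The tier names, demands, and pricing curve are all assumptions for the example.

```python
def price(utilization: float) -> float:
    """Price rises toward infinity as a resource saturates (utilization -> 1.0)."""
    return 1.0 / (1.0 - min(utilization, 0.999))

def place(workloads: dict, capacities: dict) -> dict:
    """Greedily place each workload on the tier where its resource price is lowest."""
    used = {tier: 0.0 for tier in capacities}
    placement = {}
    for name, demand in workloads.items():
        # Each workload shops for the cheapest tier, given current utilization.
        tier = min(capacities,
                   key=lambda t: price((used[t] + demand) / capacities[t]))
        used[tier] += demand
        placement[name] = tier
    return placement

# Illustrative capacities and demands (arbitrary units).
capacities = {"edge": 10.0, "cloud": 100.0}
workloads = {"sensor-analytics": 6.0, "batch-ml": 40.0, "dashboard": 2.0}
print(place(workloads, capacities))
```

The design point of pricing by utilization is that congested resources become expensive automatically, so demand self-balances toward the "desired state" without a central planner hard-coding placement rules.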