When Akamai was founded in 1998, co-founders Daniel M. Lewin and Tom Leighton were not only inventing the first content delivery network (CDN), but also creating an early prototype for edge computing. Akamai’s network of servers distributed around the world has allowed customers to accelerate their digital content by serving it from locations close to the end user.
As edge delivery has grown beyond simple content, traditional CDNs face new challenges. Dynamic sites rely on complex logic to render content that often can’t be cached, and devices are typically far from origin servers, requesting content that is rarely optimized for them.
The CapEx Model
Traditional CDNs have typically adopted a Capital Expenditure (CapEx) model as opposed to an Operational Expenditure (OpEx) one. The business model of the CDN has relied on an in-house approach to data centers, which historically has required large CapEx investments to purchase space, equipment, and software, and to support all of the resources required to maintain those assets.
If a customer has very large data processing and storage needs, a traditional CDN would develop a strategy involving spinning up new data centers, buying and positioning new servers, and purchasing extra capacity at different physical locations around the globe as close to where the customer’s users are as possible. In that model, physical hardware is placed in a particular location or locations, and profits are made by optimizing the site and densely packing in as many users as possible.
The OpEx Model
In an OpEx model, providers choose not to invest in their own infrastructure and instead leverage existing hardware from other providers (e.g. cloud hosting, bare metal, telco, 5G). By not being tied to their own fixed networks, providers using an OpEx model are able to take a more demand-based approach, using flexible strategies and workflows to best serve each customer’s unique requirements and maximize cost efficiency.
According to Gartner, global spending on IT slowed down last year, particularly on device and data center equipment. John-David Lovelock, research VP, says software is expected to be the fastest-growing market across 2020, reaching 10.5% growth. The research agency notes that organizations dedicating a high percentage of IT spending to cloud adoption are “where the next-generation, disruptive business models will emerge.”
The Growing Edge Infrastructure Network
There is a rapidly growing global network of edge infrastructure (edge data centers, hardware, and networks), ranging from major cloud providers like AWS, Google, and Microsoft to more niche players like Digital Ocean, Packet (a division of Equinix), Vapor IO, and RackCorp, not to mention everything going on with the 5G rollout.
When CDNs and edge compute platforms talk about a multi-provider model, they are referring to the ability to target and access the benefits of different providers. Being locked into a single provider means that you’re only able to move and scale within a fixed network. Having access to a diverse set of edge infrastructure offers many advantages:
- Reliability: When a provider experiences downtime, you can failover to another provider.
- Scalability: Spin up/down infrastructure to meet real-time demands.
- Expansive edge footprint: Access to an aggregated network of infrastructure provides a global edge footprint beyond what any single provider can offer.
- Flexibility in moving workload: Making decisions about where to move workload is significantly easier than if you had to factor in the costs of building infrastructure from the ground up.
For many, the ability to freely move capacity to wherever it makes the most sense to serve customers makes for a more attractive model. This is the camp that we sit in at Section. We see the importance of giving developers access to infrastructure across a diverse set of providers so that the full potential of edge computing can be realized.
When we talk about flexibility, what we mean is the ability for developers to deploy workloads onto any piece of hardware where we can run a Kubernetes cluster. That might include a standard cloud hosting provider, or, alternatively, it might be a cell phone tower location. As an edge compute platform, we want boundless ability to run workloads in the locations that are most suited to meet the demands of our customers’ applications. We like to refer to this as, “Any workload, anywhere.”
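As a hypothetical sketch of what “any workload, anywhere” can look like in practice, a workload deployed to any conformant Kubernetes cluster can be steered toward particular infrastructure using the standard `nodeSelector` scheduling primitive. The label key and value below (`example.com/infrastructure: edge`) are invented for illustration, not Section’s actual API:

```yaml
# Hypothetical Deployment manifest. The node label is an assumption
# made up for this example; a real platform would define its own labels.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: edge-app
  template:
    metadata:
      labels:
        app: edge-app
    spec:
      # Pin pods to nodes labeled as edge locations, whether those
      # nodes live in a cloud region or at a cell tower site.
      nodeSelector:
        example.com/infrastructure: edge
      containers:
        - name: app
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Because `nodeSelector` is a core Kubernetes feature, the same manifest runs unchanged on any cluster, which is part of what makes workloads portable across a diverse set of providers.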
The Role of Open Source and Industry Cooperation in Enabling OpEx Models
Open source technologies and industry cooperation are further enabling the OpEx model for edge computing. Advancements in areas such as infrastructure-as-code and cloud-native technologies have allowed Section to build a platform that extends the benefits of the OpEx model to our customers.
When Section joined the Kinetic Edge Alliance last year, Cole Crawford, CEO at Vapor IO, spoke to this.
“Vapor IO’s Kinetic Edge will deliver the next wave of network and data center infrastructure, vital to support 5G, autonomous vehicles, and many other exciting technologies and applications. To meet the needs of the future, our industry must collaborate. We must bring together all of the products, platforms, and technologies, such as those offered by Section, in order to help make operators and developers successful with edge deployments.” – Cole Crawford, CEO at Vapor IO