A Checkpoint at a Pivotal Moment

Anyone who listens to the NPR podcast How I Built This is familiar with the standard question host Guy Raz asks every founder he interviews: “How much of your success do you attribute to your hard work and determination, and how much to luck?” While the question typically elicits humble reflections describing a combination of the two, it’s interesting to contemplate it not in retrospect, but as a company on the precipice of extreme growth potential. Potential carries heaps of responsibility: responsibility to make the right decisions at the right time and execute, execute, execute.

All companies, startups and Fortune 500 alike, face pivotal moments in their growth cycles where key decisions can lead to hockey-stick growth or, umm, otherwise. These decisions span the entire organization: product vision and development, sales and marketing strategy, talent acquisition and retention, financial strategy and management, and everything in between. Ultimately, it’s the combination of these cross-functional decisions that determines our destiny.

So we ask ourselves, are we in the right place at the right time doing the right things?

Edge Computing: The Opportunity and Challenges Ahead

As with the early stages of any category, hype is inevitably accompanied by uncertainty, and this is certainly true of the edge computing market.

Is the Opportunity Massive?

Gartner has predicted that the “Edge will eat the Cloud”. Given that Gartner estimates the Cloud is currently worth more than $200B per annum and growing at more than 17% CAGR, this is no small prediction. The Linux Foundation expects the Edge to overtake the Cloud within just five years. So, the opportunity at the Edge is expected to be huge. In fact, Chetan Sharma believes the Edge industry could be worth $4 Trillion by 2030.

What is driving this opportunity?

Moving workloads to the Edge allows engineers to build more scalable, more secure and faster applications, as they can take advantage of:

  • Reduced Latency - Executing and delivering closer to the end-user;
  • Reduced Data Backhaul - Handling massive volumes of data at the distributed Edge rather than shunting back over expensive networking to bloated central infrastructure;
  • A Security Perimeter - The ability to run defense in depth with specific mitigation software and networking defenses; and
  • Volumes of Cost-Effective Compute Power - Computers distributed in execution locations are ready to be leveraged.

The transition to Cloud happened relatively quickly. Engineers are now used to the abstraction Cloud affords from physical machines and the benefits of instant spin up (and down) of new machines. There are very few “box hugging” ops folks left who really care about the specific server on which their workload is executed.

Edge is the next evolution of the Cloud abstraction. Not only should engineers no longer care on which server their workload runs, but they also should not care in which location it runs. They should really only care that their workload runs in the most performant location for their specific cost and compliance parameters.
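To make that abstraction concrete, the placement decision could be sketched as a simple selection function: pick the lowest-latency location that satisfies the cost and compliance parameters the engineer declared. This is a hypothetical illustration only; `EdgeLocation`, `best_location`, and every parameter here are invented for clarity and are not any vendor's actual scheduler or API.

```python
from dataclasses import dataclass

@dataclass
class EdgeLocation:
    name: str
    latency_ms: float      # expected latency to the end-user
    cost_per_hour: float   # compute cost in this location
    region: str            # jurisdiction, for compliance checks

def best_location(locations, max_cost, allowed_regions):
    """Pick the lowest-latency location that satisfies the cost
    and compliance constraints the engineer declared."""
    candidates = [
        loc for loc in locations
        if loc.cost_per_hour <= max_cost and loc.region in allowed_regions
    ]
    if not candidates:
        raise ValueError("no location satisfies the constraints")
    return min(candidates, key=lambda loc: loc.latency_ms)

# Invented example data: three candidate edge locations.
locations = [
    EdgeLocation("sydney-1", 12.0, 0.30, "AU"),
    EdgeLocation("tokyo-2", 45.0, 0.12, "JP"),
    EdgeLocation("frankfurt-1", 180.0, 0.10, "EU"),
]
print(best_location(locations, max_cost=0.35, allowed_regions={"AU", "JP"}).name)
# → sydney-1
```

The point of the sketch is that the engineer states constraints, not locations; the fabric resolves where the workload actually lands.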

Can We Address the Challenges?

The key challenges of Edge are both industry-wide and user-specific.

Developing a truly successful Edge fabric to meet the needs of the 2030 Internet will depend on certain levels of cooperation and standardization. Closed networks of proprietary software (e.g. the legacy CDNs Fastly, Akamai, Cloudflare, etc.) which have served the Internet from 2000 until now will not be able to solve for the hundreds of thousands of Edge locations and truly custom workloads that engineers will need to run in those locations. Work underway by collaborative open standards efforts such as the Open Networking Foundation (ONF) and ETSI's Multi-access Edge Computing (MEC) initiative is meeting this challenge head-on.

The Edge should be both widely distributed and highly fluid, yet we need to present that flexible distributed Edge compute fabric as a “simple to approach” compute infrastructure for both dev and ops engineers. We need to provide them with the same level of familiarity and control that the Cloud ultimately had to provide to entice ops engineers away from their racked servers.

Our Vision for a Developer-Empowered Edge

From day one at Section, we believed we would improve the Internet by providing engineers with access to and control of workloads at the Edge. It was a few years ago that we dreamed up this vision, and ‘the Edge’ wasn’t really a thing back then. So, we didn’t use those exact words, but pretty damn close.

We have also stuck by three guiding principles in this process from day one: Open, Control, Easy. The framework we provide (and the Edge) should be Open. Engineers should have real Control over that Edge, and we should make it Easy for them to use. Simple, huh!

So our vision of the Edge is one where engineers can run:

Any Workload: Container, serverless, best-of-breed solutions…

Anywhere: Access the optimal Edge locations for your application

  • Vendor-neutral Composable Edge Cloud:
    • Infrastructure Edge
    • Telco Edge (Exchange, Headend, 5G, etc.)
    • On-Premise Edge
  • Highly Performant
  • Intelligent Scalability: Spin up and tear down edge compute based on demand
  • Security: Segregation, separation, and defense in depth
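The “Intelligent Scalability” point above — spinning edge compute up and down with demand — can be illustrated with a minimal threshold autoscaler. The function and all of its parameters are invented for this sketch; it is not a description of any real control loop.

```python
import math

def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=1, max_instances=50):
    """Size the edge fleet to current demand, clamped to
    operator-defined bounds: spin up under load, tear down when idle."""
    needed = math.ceil(requests_per_sec / capacity_per_instance)
    return max(min_instances, min(max_instances, needed))

print(desired_instances(750))  # demand spike: scale up to 8 instances
print(desired_instances(40))   # demand drops: tear down to the 1-instance floor
```

In practice a real system would also smooth the signal over time to avoid flapping, but the core idea is the same: capacity follows demand rather than being provisioned in advance.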

DevOps-Centric Control

  • Normal application development lifecycles
  • Observability: Real-time, comprehensive diagnostics framework

The Honest Assessment

We are investing daily to deliver our vision of the Edge. Is this the right time and the right direction, and will all of this work materially improve the Internet? The truth is, we don’t know yet.

Yes, the analysts and industry boffins are now strongly articulating our day-one vision, and while this does feel great, it’s not proof. On the other hand, we have really been enjoying working with customers and partners on a day-to-day basis to help them deliver superior experiences for their end-users. Engineers are seeking better ways to deliver faster, more secure and more scalable applications.

Overall, we now believe more strongly than ever that if we give engineers access to and control of workloads at the Edge, make decisions that continue to be rooted in innovative developer experiences, and execute, execute, execute, we just might find ourselves sitting across from Guy Raz answering that question: “How much of your success can be attributed to hard work and determination, and how much to luck?”
