Making Hard Things Easy is Hard: Section's Kurt Rinehart Speaks with the Data on Kubernetes Community
Making complicated things easy has long been a prime objective of technological innovation. Let’s imagine, for example, you’ve launched a new application that finally addresses a perennial business problem. User adoption is skyrocketing. You’ve made it, or you’re at least well on your way. But as your user base grows and spreads around the world, have you addressed the location challenge?
The most common response to this hypothetical question is that you’re relying on one of the hyperscalers to host the app, with the goal of ensuring high availability and, presumably, high performance. But have you considered where you really want to deploy that app? More specifically, have you thought about where that app should be running at any given moment in time?
The answer is almost always some variation of, “Well, I want the best performance and I want it at the lowest cost.” However, these two expectations are in direct conflict with one another. If you want the best performance, you’ll typically need to spend ever-larger amounts of money to deploy that app in more and more data centers. But if you want the lowest cost, you’ll need to pick just one or a handful of facilities – and your performance will suffer to the tune of higher latencies for your customers around the globe.
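To make that tension concrete, here is a minimal back-of-the-envelope sketch in Python. The cost and latency figures are invented for illustration, and the diminishing-returns curve is an assumption, not a measurement from any particular provider:

```python
# Back-of-the-envelope sketch of the trade-off, using made-up numbers:
# each additional region adds roughly fixed hosting cost, while the
# expected latency for a globally spread user base falls with coverage.

COST_PER_REGION_PER_MONTH = 450.0   # hypothetical flat hosting cost (USD)
BASE_LATENCY_MS = 220.0             # rough worst-case round trip from a single region
MIN_LATENCY_MS = 30.0               # latency floor once users are near an edge

def monthly_cost(regions: int) -> float:
    """Cost grows roughly linearly with the number of regions you deploy to."""
    return regions * COST_PER_REGION_PER_MONTH

def expected_latency_ms(regions: int) -> float:
    """Latency improves with coverage, but with diminishing returns."""
    return MIN_LATENCY_MS + (BASE_LATENCY_MS - MIN_LATENCY_MS) / regions

if __name__ == "__main__":
    for regions in (1, 3, 10, 30):
        print(f"{regions:>2} regions: "
              f"~${monthly_cost(regions):>8,.0f}/mo, "
              f"~{expected_latency_ms(regions):>5.0f} ms expected latency")
```

Even with these toy numbers, the shape of the problem is clear: the cost line keeps climbing while the latency gains flatten out, so picking a fixed number of regions is always a compromise.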
These are just a few of the challenges and trade-offs that Kurt Rinehart, Section’s Director of Information Engineering, recently discussed when he joined Bart Farrell for a Data on Kubernetes Community Talks event. Kurt points to the need for intelligent systems, driven by business goals and strategies, that still maintain simplicity, flexibility and control for DevOps teams. He explains how Section helps users address these challenges with its Adaptive Edge Engine, which continuously tunes and reconfigures your edge delivery network to ensure your workloads are always running on the optimal compute for your application.
When it comes to trade-offs, Kurt describes the all-too-common scenario where, in order to ensure a high degree of performance, you run the application across all of your data centers 24/7/365. It’s understood that you’re going to pay a significant amount of money to host your app in regions where traffic is low because your users are asleep. The trade-off is accepted because developers and engineers shouldn’t have to manage the ongoing optimization of increasingly complex distribution strategies – which is exactly why a computational approach should take over instead, reshaping the deployment in real time to meet actual traffic demand.
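To illustrate what “taking a computational approach” can mean in practice, here is a minimal Python sketch of a traffic-driven placement rule. The region names, threshold and traffic numbers are hypothetical, and a production system such as Section’s weighs far more signals (cost, capacity, business rules) than this:

```python
# A minimal sketch of the "computational" alternative: instead of running
# everywhere 24/7, recompute the set of active regions from live traffic.
# The regions, threshold, and request counts below are all hypothetical.

from typing import Dict, List

MIN_REQUESTS_PER_MIN = 50   # below this, a region is considered "asleep"
ALWAYS_ON = {"us-east"}     # keep at least one region up as a fallback

def choose_active_regions(traffic: Dict[str, int]) -> List[str]:
    """Return the regions that should run replicas for the current window."""
    active = {region for region, rpm in traffic.items()
              if rpm >= MIN_REQUESTS_PER_MIN}
    return sorted(active | ALWAYS_ON)

if __name__ == "__main__":
    # Example traffic snapshot: Asia-Pacific users are asleep right now.
    snapshot = {"us-east": 1200, "eu-west": 800, "ap-southeast": 12}
    print(choose_active_regions(snapshot))   # ['eu-west', 'us-east']
```

Re-running a rule like this on every traffic window is the kind of ongoing optimization no team wants to do by hand, which is precisely the point Kurt makes about letting the system handle it.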
Check out the full episode to hear more from Kurt about how Section is solving the location challenge by optimizing hosting for proximity and cost – dynamically and in real time – giving users the benefits of distributed applications and services without the complexity of managing them.