Interview with Section CEO Stewart McGrath on JetRails Podcast

Section CEO Stewart McGrath recently sat down with Robert Rand on the JetRails podcast to chat about Helping Your Website Live On The Edge. The discussion covers everything from how Section got its name, to how we think about Edge and its application for today’s developers.

This post highlights select snippets from the interview. You can view/listen to the entire episode at The JetRails Podcast.

What is Section?

We’re essentially an organization that helps engineers get control of, and access to, their application workloads in distributed formats beyond centralized infrastructure.

How did you get the name Section.io?

There’s a little bit of a story behind it. We’ve dropped the .io these days to keep things simpler, so we’re just Section. The name originated the way it does in most organizations: standing around a whiteboard with a bunch of people who were interested in thinking about what this application meant for the future of the Internet.

Our mission is really about changing how applications work on the Internet, and we were thinking about Section when we were thinking about the volume of traffic that was going to traverse the Internet in the future. We were thinking about that as an oncoming wave, so some of us at Section enjoy surfing and the part of the surfing wave that is often talked about in surfing parlance is the “section” of the wave. You either make the section or you don’t make it. The bits between the broken parts of the wave are probably the most interesting to surf on.

It also has a little bit of a connotation with respect to the container based structure of our platform in that applications are essentially deployed in sections or components on the Section platform at the edge.

What can you tell me from your vantage point as a leader in edge networking? What goes into an edge network, and why have these become so important in the market? Is there any mystery you typically run into as you explain what your company has been able to achieve and why you’re in demand?

It’s possibly a minefield of a question. Before I answer that question, we should probably tackle the big one, which is, “Where is the edge?” We live on a spherical planet and the Internet is a series of interconnected devices in a mesh; therefore the reality is that there probably isn’t an edge to the Internet.

However, when we think about edge, people have come at the problem in the provider and the service provider space and thought about the edge as essentially where their infrastructure is. We like to talk about where the edge is as where application controllers or owners start to cede control of components of their application to other parts of the Internet chain. Again, some of these legacy service providers, whether it’s cloud or CDN or even telcos, will all talk about the edge essentially being where their infrastructure is, as that’s where they have or will seek control of components of an application.

When we think about it from an application owner’s perspective, as an application is delivered from centralized infrastructure right through to an end user, which may be an Internet browser or an application user, there’s actually steps all along the way there where control can be obtained, maintained or ceded subject to the application case. We think about the edge not just as one layer in that delivery chain out to the end user, but more of an edge continuum in that all of those locations are edges to the Internet where certain components of the application can be run, delivered and parts of the control space can be ceded along the way.

Back to your question, how do we think about an edge network or what the Section edge network is?

We think about that edge network as being able to run on any parts of that application delivery chain, whether that be from the origin infrastructure through to the infrastructure layer, cloud, through to running in telco and on telco compute technology, right through to on-premise or even down to the application device itself.

I know you mentioned CDNs. On the content delivery network side of things, instead of pulling information out of one web hosting server, you’ve seeded it across the edge network: copies of images and other cached files are stored across the globe, so in many cases a local version can be delivered very fast. You also have other optimization levels to deliver at the right sizes for the right browsers. What else is happening out there? I know that because your team has this containerized solution, there are a lot of different things you can drop in to go above and beyond just the CDN implications and the optimizations that go with them.

That’s true. We’re reaching a point now where more and more parts of applications are being delivered from distributed infrastructure. You mentioned a couple of those parts of an application, which is delivering images or changing the images on the fly even, which you refer to as image optimization, for better performance in the browser.

We’ve also got application elements such as security layers being delivered from the edge. Web application firewalls (WAFs) are one example, where customers run logic at the edge to prevent spurious attacks on their website. Another is bot mitigation software running at the edge to prevent scraping of proprietary information or probing attacks on their websites.

We’re seeing more marketing style application elements being delivered from the edge as opposed to just straight performance and security elements of the application being delivered from the edge. We now have things like server-side multivariate testing being injected into the delivery chain from the edge.

Like a more advanced A/B testing, where you can have multivariate, so A/B/C all mixed together, to find the winning permutations of different changes to content or site elements?

Precisely. You can drop that in server-side as opposed to in the browser, which provides some great benefits from a performance and usability perspective. Really, all these components are parts of application logic that are more easily injected at the edge through this container structure we’ve delivered.

Also, it provides a scalability paradigm that engineers really enjoy because they don’t have to think about these components of their application running on their servers and centralized infrastructure. They can focus on running the important parts of the application that need to run on the centralized infrastructure while letting the edge run these other components or other logic elements of the application.
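The server-side experimentation idea described above can be sketched in a few lines. This is a hypothetical illustration, not Section’s actual implementation: an edge node hashes a user identifier to assign each visitor a stable variant per experiment, with no session state on the origin. The experiment names and options are invented for the example.

```python
import hashlib

# Hypothetical experiments an edge node might run; names and
# options are illustrative only, not a real Section API.
VARIANTS = {
    "headline": ["A", "B"],
    "cta_color": ["green", "blue", "orange"],
}

def assign_variants(user_id: str) -> dict:
    """Deterministically assign a user to one variant per experiment.

    Hashing the user ID keeps the assignment stable across requests
    without any server-side session state, which is what lets this
    logic run at the edge instead of on centralized infrastructure.
    """
    assignment = {}
    for experiment, options in VARIANTS.items():
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        assignment[experiment] = options[int(digest, 16) % len(options)]
    return assignment
```

Because the mapping is a pure function of the user ID, every edge node worldwide gives the same user the same experience, which is one reason server-side multivariate testing can live in the delivery chain at all.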

Your team is one of the leaders in Varnish at the edge. That’s been interesting in terms of opportunities to enhance performance for users who rely on those caching layers to operate an advanced website in a more performant way, because otherwise, without caching layers, Magento is a bit of a brick.

I think that’s fair. We all know Magento loves a bit of caching, and not just image caching but HTML document caching, which becomes a really important thing in terms of scalability and performance. I think that’s the case for most modern applications to be honest. If you think about even static site delivery, WordPress, Drupal, you name it in the CMS landscape…
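The HTML document caching mentioned here follows a simple pattern: serve a rendered page from the cache while it is fresh, and only fall back to the expensive origin render on a miss. The sketch below is an illustrative Python miniature of that pattern under assumed names and a made-up TTL, not Varnish itself or Section’s implementation.

```python
import time

class DocumentCache:
    """Toy full-page (HTML document) cache with a fixed TTL."""

    def __init__(self, ttl_seconds: float = 60.0):
        self.ttl = ttl_seconds
        self._store = {}  # path -> (expiry timestamp, HTML body)

    def get(self, path: str):
        entry = self._store.get(path)
        if entry and entry[0] > time.time():
            return entry[1]  # cache hit: serve without touching origin
        return None          # miss or expired

    def put(self, path: str, html: str):
        self._store[path] = (time.time() + self.ttl, html)

def handle_request(cache, path, render_from_origin):
    """Serve from cache when fresh; otherwise render once and store."""
    cached = cache.get(path)
    if cached is not None:
        return cached
    body = render_from_origin(path)  # expensive, e.g. a Magento page render
    cache.put(path, body)
    return body
```

The scalability win is visible even in the toy: after the first request, repeat visitors within the TTL never reach the origin, which is exactly why document caching matters so much for heavyweight CMS platforms.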

I think everyone is looking for performance at this point. If they’re not, we try to educate them, because poor performance impacts their marketing campaigns, their conversion rates, and so many other metrics they do care about. They have to be doing things to make the website itself efficient, and there are absolutely things happening in the various hosting layers that are going to be crucial. It’s interesting.

It has been interesting watching this conversation develop over the last ten years. Ten years ago, the web performance conversation was more about, does web performance improve conversion? Does it improve the user experience? That conversation has changed now, from how does it help to how do we most cost-effectively deliver that performance for our application at any scale. We’ve moved on from should we to how do we.

We know that for a lot of the site owners and higher-level folks in the organizations and businesses that need this technology, it’s sometimes a bunch of technobabble and it’s too much. Our clients often trust us to bring the right options to them, and that’s important in general. Our goal is to have the right advanced stack so it meets the needs and expectations of our clients.

Your stack in particular is interesting, especially in how it aligns with the way we look at things. At JetRails, we don’t like a black box. We like businesses to be able to choose the right tech. We want to simplify the management of that and have great components: WAFs, tools to block out bots and DDoS attacks. But if no one is monitoring and managing those tools, seeing what’s going on and tamping things down, it doesn’t always go as smoothly as we’d like. There’s a human element to security and these other facets.

Our goal is to be able to say we have a 24/7 network operations center (NOC) where we can monitor, manage, and maintain these things and handle it for you, but you need to have the right tech. Your team gives us different options: AI-powered WAFs, best-in-class tech that can be stacked together. We don’t have to settle for a CDN that is really strong in one area while the other components that come with it are not as strong.

Basically, you’ve containerized it; that is so interesting. I don’t know if you have a lot of competitors that are doing something similar. For us, by and large, it’s been a really good offering for clients that want to be able to choose the best from column A, B, C and so on. Is that where you differentiate in the market? Is that one of the biggest value propositions of what you’ve created, and the processes and procedures that you’ve put into place?

You’ve definitely hit on something that’s key to how we think about the two major components of Section: flexibility and control. As you mentioned, because we offer a containerized edge infrastructure, we can offer engineers flexibility in what they run at the edge. Yes, we do offer multiple different WAFs on our infrastructure, different image optimization tech, different bot tech, different acceleration tech, virtual waiting rooms, and more. There’s also the opportunity for engineers to bring customized workloads to Section, whether containerized or serverless, which we’ll run at the edge for them.

What that means… When I used to buy CDN as a CIO, when I was thinking about going to those content delivery networks, I had to make choices whether I would go with CDN 1, which was stronger in the performance element but weaker in security or CDN 2, which was stronger in security but weaker in performance. I couldn’t have the best of both worlds.

We built Section specifically to provide that sort of flexibility for engineers so they could choose the right software for their application at the right location; for example, some customers choose a higher end web application firewall, but a lower-end image optimization solution, or maybe they don’t even need an image optimization solution, so they choose not to deploy it. There’s great flexibility there.

The other component that is really important for engineers is familiarity and control. Being able to approach those elements with a proper application development lifecycle against the edge. So, we have fully Git-backed workflows sitting behind the Section platform, as well as an API-first mentality.

We are really thinking about this from the application standpoint as opposed to the network standpoint. If you think about a CDN world, it really did spawn from solving a network problem, a networking issue, and what we’re talking about these days are more application style issues. Asking things like how do I get my application to work more effectively for more users, but also in a cost efficient manner so that we can provide the right software, in the right application, at the right time.

How about data and analytics? I know some of your competitors try to provide a certain amount of information to help users understand how the network is performing and how the tools in place are affecting users. Have you found that to be something that’s in demand, that your team has had to invest a lot into addressing or consistently pay attention to?

I think I mentioned earlier that one of the things we need to give engineers, and not just developers of applications but also operations organizations, or DevOps teams depending on how you frame your internal team, is a feeling of control over the edge environment. In order to provide control, we need feedback loops, so metrics are a really big part of how we think about running the edge.

We log all the requests that pass through our platform and surface them in a searchable logs environment, so Elasticsearch, Kibana, and Logstash are key interfaces in the Section console. We also roll that up into metrics, so we can provide some time-series information there. That’s available within less than 30 seconds of a request passing through our platform. It’s not technically real-time, but it’s close enough that engineers and operations teams can find out what’s going on at the edge at all times.
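The roll-up from raw request logs into time-series metrics can be illustrated with a small sketch. This is a hedged example in the spirit of the pipeline described above; the log field names and 30-second window are assumptions for illustration, not Section’s actual schema.

```python
from collections import defaultdict

def bucket_metrics(logs, window_seconds=30):
    """Group request log entries into fixed time windows.

    Each entry is assumed to carry a ``timestamp`` (seconds) and an
    HTTP ``status``; the output counts requests and 5xx errors per
    window, the kind of series an operations team would chart.
    """
    buckets = defaultdict(lambda: {"requests": 0, "errors": 0})
    for entry in logs:
        window = int(entry["timestamp"] // window_seconds) * window_seconds
        buckets[window]["requests"] += 1
        if entry["status"] >= 500:
            buckets[window]["errors"] += 1
    return dict(buckets)
```

Aggregating in fixed windows like this is what turns a searchable log stream into the near-real-time feedback loop the answer describes.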

We’re passionate users of diagnostics and information. We encourage surfacing that in our customers’ environments as well. One of the challenges as we think about distributed systems moving forwards and edge compute is being able to visualize what’s happening in a global application environment.

We’ve spent quite a bit of time working with Metalab, the company that designed Slack’s UX from the ground up, on a new interface that will be released in the Section console shortly. It’s a 3D visualization of global traffic that’s useful not just from a day-to-day operations perspective for engineers, but also for describing to the CFO, CMO, or CEO of an organization what’s happening in the system at any particular point in time.

I had a great talk with one of our long-time customers, a large car manufacturer, that had been running a security layer and a caching layer on the Section platform. When I talked through an early prototype of this visualization with their engineering team, they said, “Ah, that’s where that layer is running globally.” It was a real lightbulb moment for those folks to see how and where the distributed system is running in some sort of visualization.

The answer to your question is yes, we’re strong on metrics and very passionate about them, and passionate about engineering teams being able to use those metrics to describe the story of what they’re doing and what wins they’re having with an edge compute platform.

Anything new coming down the pipe or final thoughts you want to share before we close?

There’s plenty new and exciting coming. I mentioned a new DX (developer experience) coming down the pipe that I think is going to be game-changing in terms of how engineering teams can visualize and communicate around their distributed architecture, i.e. their edge compute. That’s really exciting. I can’t wait to get that into the market and get people testing it.

We’re also spending a fair bit of time thinking about traffic forecasting, and we’ve built a machine learning structure behind it to improve the accuracy of moving traffic to the right places at the right time and bringing up infrastructure in the right place at the right time. We’re seeing some really good results from the neural net we’ve built there, and I’m pretty excited to see that in the wild as well.
