Peak to Trough

The importance of auto-scaling 

[Graph: peak to trough traffic]

The cloud enables us to acquire hardware on demand for our services. I have never had to rack a server or worry about hardware failure; my entire software career has been in the cloud. As an industry, most of us no longer need to forecast hardware requirements months in advance. We just increase the number of virtual machines we need in the PaaS dashboard.

This week I was investigating some unusually large peaks in our daily traffic. Adjusting the bounds and time window of the graph, I noticed that we had a 10x difference between peak and trough. Usage peaks at 10x trough for about two hours each day, and runs at roughly 5x trough for another six hours. At night our traffic drops off significantly because our users are asleep.

My current team, like every team I have worked with in my five-year career, does not use auto-scaling. We experimented with it last year but ran into issues with auto-scaling interfering with our deployments in unpredictable ways.

So we scale for our instantaneous peak, which is 10x our lowest traffic at around 2am. That means that for most of the day we are using at least 5x as much hardware as we need.
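
To put rough numbers on it, here is a quick sketch using the daily profile above. The overnight level is an assumption on my part (the only thing I know for sure is that traffic drops off significantly at night), so treat the output as illustrative rather than measured.

```python
# Rough over-provisioning math using the traffic profile described above.
# The 16-hour overnight/off-peak level is an assumption, not a measurement.
PROVISIONED = 10  # we provision for the instantaneous peak: 10x trough

profile = [
    (2, 10),   # ~2 hours/day at the instantaneous peak (10x trough)
    (6, 5),    # ~6 hours/day at the sustained peak (5x trough)
    (16, 2),   # remaining hours assumed at ~2x trough
]

for hours, demand in profile:
    print(f"{hours:>2}h at {demand:>2}x demand -> "
          f"{PROVISIONED / demand:.1f}x over-provisioned")
```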

The drawing underestimates the impact of the instantaneous peaks, which essentially double the traffic to this service.

Auto-scaling would be a great fit for this service. Most cloud platforms have supported this use case for years, and adopting it would yield decent savings.
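
As a sketch of what that could look like, here is a target-tracking policy for a containerized service on ECS using boto3. The cluster and service names, capacity bounds, and CPU target are placeholders, and whatever platform you actually run on will have its own equivalent.

```python
# A minimal sketch, assuming an ECS service and boto3. All names, capacities,
# and thresholds below are placeholders -- adjust for your own platform.
import boto3

autoscaling = boto3.client("application-autoscaling")
resource_id = "service/my-cluster/my-api-service"  # hypothetical ECS service

# Let the service scale between trough capacity and peak capacity.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,    # enough for the 2am trough
    MaxCapacity=20,   # enough for the instantaneous peak
)

# Track average CPU; the platform adds or removes tasks to hold ~50%.
autoscaling.put_scaling_policy(
    PolicyName="cpu-target-tracking",
    ServiceNamespace="ecs",
    ResourceId=resource_id,
    ScalableDimension="ecs:service:DesiredCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 50.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
        },
        "ScaleOutCooldown": 60,    # react quickly to the morning ramp
        "ScaleInCooldown": 300,    # scale in slowly to avoid flapping
    },
)
```

A generous scale-in cooldown, or temporarily suspending scaling during deploys, is also one way to avoid the kind of interference with deployments that burned us last year.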

Designing your development environment

What should your development environment look like? People talk about how it's hard to set up development environments, how this or that component is tricky, and so on.

But what components should you actually have? For a backend API server in a major language, you will probably have an application that serves requests while writing to a database, logging, and emitting metrics.
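
As a minimal sketch of those moving parts, here is a stdlib-only Python version: an HTTP handler that writes to SQLite, logs, and bumps a counter standing in for a real metrics client. The names are illustrative, not a prescription.

```python
# Minimal sketch of the moving parts: serve requests, write to a database,
# log, and emit a metric. Stdlib only; names are illustrative.
import json
import logging
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("api")

db = sqlite3.connect("dev.db")
db.execute("CREATE TABLE IF NOT EXISTS events (body TEXT)")

request_count = 0  # stand-in for a real metrics client (StatsD, Prometheus, ...)

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        global request_count
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        db.execute("INSERT INTO events VALUES (?)", (body.decode(),))
        db.commit()                                         # write to the database
        request_count += 1                                  # emit a metric
        log.info("stored event, total=%d", request_count)   # log
        self.send_response(201)
        self.end_headers()
        self.wfile.write(json.dumps({"ok": True}).encode())

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()
```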

If you follow best practices you will have unit tests, a deployment pipeline, and integration and end-to-end tests. All that stuff is great, but what about the development environment?

You should have a best-in-class IDE. Whether that is Visual Studio, IntelliJ, or Emacs, make sure you have the core features: syntax highlighting, go-to-definition, safe renaming, and a debugger.

You want some way to run integration tests on a local developer's machine. In my workplace we ssh into VMs on which we can run the entire development stack. If you can manage that without the VM, do it; it's 10x better.

You want to be able to run a remote debugger against a fully running version of your application. Ideally, you should be able to test manually end-to-end against a version of your application running on your local machine. 

If you have only one service, this is easy. If your microservice is 1 of 100, making that happen is trickier but worth it.
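
As one concrete illustration: if the service in question happens to be Python, embedding debugpy (the engine behind VS Code's Python debugger) is enough to let an IDE attach to the running process; a Java stack gets the same effect from the JVM's JDWP agent. The host and port below are placeholders.

```python
# A small sketch using debugpy so an IDE can attach to a running service.
# Host and port are placeholders; a JVM service would use the JDWP agent.
import debugpy

debugpy.listen(("0.0.0.0", 5678))   # expose a debug-adapter endpoint
print("waiting for a debugger to attach...")
debugpy.wait_for_client()           # block until the IDE connects
debugpy.breakpoint()                # pause here once a client is attached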


Code coverage should be verified during your build. 
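
One way to wire that into the build, assuming pytest with the pytest-cov plugin; the package name and the 80% threshold are placeholders:

```python
# Hypothetical build step: fail the build if tests fail or coverage dips
# below the threshold. Assumes pytest and pytest-cov are installed.
import subprocess
import sys

result = subprocess.run(
    ["pytest", "--cov=myservice", "--cov-fail-under=80"]
)
sys.exit(result.returncode)
```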

Code style linters should automatically apply styling fixes. The build should never break over styling issues.

Builds should be FAST. Every minute of build time can safely be assumed to result in wasted developer time. The ideal build time is under 15 seconds, including unit tests. 

The longer your build time, the more distractions will edge into your workflow. If builds take a minute or two, devs will click over to Slack or their browser. If builds take over 5 minutes, devs will be talking to their coworkers and getting distracted. If builds take 15+ minutes, it's bad: that means fewer than 4 code changes can be verified per hour.

Story development

It’s a common statement that once you are a senior engineer you don’t get to code anymore. It’s not that senior engineers are forbidden from coding; it’s still on the job description.

But senior engineers get pulled into so many tasks they rarely have time for coding. 

A senior engineer might get pulled into a critical outage, a roadmap meeting, defending architectural boundaries from other teams, assisting team members with their tasks, reviewing code, or coordinating large projects with other teams.

None of those tasks involve coding on the part of a senior engineer. And none of those tasks involve story development.

Story development is the process of taking feature requests and refining them into technical tasks.

Unless your team is stacked with experienced engineers, or works in a domain that requires little specialized knowledge, story development will fall on the senior engineer.

Maintaining a ‘sprint-ready’ backlog for a team of 10 engineers takes more than a one-hour meeting once a week.

My philosophy is that, as the senior person, I should prioritize the tasks that allow the other nine people on the team to work efficiently. If the backlog is full of two-sentence feature requests, the next sprint is going to be full of junior engineers figuring out the requirements.

Don’t ignore the backlog to fight fires. Figure out what it will take to empower the non-senior part of the team to fight the fires. Then you can focus on the higher-value tasks: building the roadmap, evolving the architecture, and developing stories.

People have given up on performance in favor of scalability

Scalability has been all the rage since the cloud made horizontal scaling easy. No longer do we have to order parts, lease colo space, or rack servers. Instead there is an infinite supply of virtual machines out there that we can rent at the press of a button. Because of this, there is a tendency to start development with an architecture that will scale well horizontally. My entire career has been during the post-AWS period. Premature optimization is the root of all evil, but make sure to create a stateless service so we can scale it up later when it's slow.

Web Servers

It's interesting to look at examples of projects that did not focus on scaling horizontally.

For example, we have Stack Exchange's public numbers on their performance.

https://stackexchange.com/performance

They claim that they handle up to 450 requests/s on 9 servers. From the infographic it looks like these are 1U or 2U servers with 64 GB of RAM, and although it's unspecified, I'm guessing they have 12-24 physical cores per machine.

These machines have around 10 times as much RAM as the VMs my team runs in production and probably over 10x the CPU performance. They handle more traffic per server with lower CPU utilization. A rough estimate from these numbers is that the Stack Exchange .NET service is 2.5x to 10x as performant as my Java service. That could just be the cost of virtual machines versus bare metal, since our stack has significantly less CPU to work with.

You might think that Stack Exchange is operating at an absurdly low CPU utilization at 5%, but I haven't seen anyone operating cloud servers above 20% utilization either (with a sample size of four companies).

Big Data

This study compared single-threaded performance on a modern CPU against distributed big data algorithms.

A single thread outperformed the distributed big data computations on many (most?) of the problems.

https://www.usenix.org/system/files/conference/hotos15/hotos15-paper-mcsherry.pdf

They found that optimized single-threaded code outperformed distributed code on the datasets they tested. Admittedly, not all datasets will fit on a single machine. But we have to remember that a single machine can now have over a TB of RAM and hundreds of TB of SSD. Single-threaded clock speeds are over 5 GHz now. A single server can handle all your big data needs until your dataset exceeds dozens of terabytes.
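
In that spirit, a lot of "big data" work is just a single-threaded pass over a file. A sketch of the shape, with a made-up file name and columns:

```python
# Single-threaded, single-pass aggregation: group by one column, sum another.
# The file name and column names are placeholders for your own dataset.
import csv
from collections import defaultdict

totals = defaultdict(float)

with open("events.csv", newline="") as f:
    for row in csv.DictReader(f):
        totals[row["user_id"]] += float(row["amount"])

top = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:10]
for user, amount in top:
    print(f"{user}\t{amount:.2f}")
```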

I’m working on learning awk to experiment in this area. It is a relatively simple domain-specific language for text processing and formatting.

If you like my writing, please buy my book on Amazon.
The Sledgeworx Guide to Getting into Software