How to make on-call great on your team.

Being on-call for software can suck. It seems like everyone has a horror story of being woken up at 3am for an outage in their service and then having to work until daylight to get things working again. I have been on an on-call rotation for about two years now and it has gone very well: we typically get paged during off hours once per week, and resolving a page usually takes under three hours. Here are some thoughts on what makes an on-call experience great.

Page Frequency 

The biggest thing, in my opinion, is keeping page frequency on the lower end: one or two off-hours pages per week. Off hours just means outside the typical 9-5 business hours; a wake-up is getting paged while you are asleep at night. A low page rate matters because it means your on-call can get adequate rest during the week. In the worst case scenario, where your on-call gets woken up twice during a week-long shift, they will still be operating reasonably well by the end of the week. Page frequency is especially important if your on-call rotation is small.

Rotation size

On-call rotation size is important: you need enough people to spread the load around. I’ve worked in rotations ranging from 3 to 11 people, and 5-10 is the sweet spot. At that size you get about a month off between on-call shifts. Beyond 10 people you start to get rusty, since each person is only on-call about every two months. Having more people also makes it easier to support vacations without anyone feeling like they didn’t get a break from being on-call. Smaller rotations are bad because the engineers on the team don’t get enough time between shifts to complete project work; feature development stalls and your team basically becomes an ops team. The bus factor is also too low in a small rotation: if one person goes on vacation and another has a power outage, you might end up with no one to respond to a page.

Clear duties

On my team we have a clear list of things we do in response to a page. Anything else will be left for business hours. 

The things we do are:

1. Scale the fleet up or down
2. Turn a feature toggle on or off
3. Roll back a deployment
4. Roll forward a fix

Note that rolling forward or patching production is the last resort. Making a code change is the slowest and riskiest way to address an outage. Whenever possible you want to make code changes during normal office hours.
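To make the toggle item concrete, here is a minimal sketch of why flipping a toggle is such a fast mitigation: the risky code path is guarded by a flag, so turning the flag off restores the old behavior without a deploy. The flag name and in-memory flag map are hypothetical; a real service would read flags from whatever flag store it already uses.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a feature-toggle guard. All names here are hypothetical,
// and a real service would query a flag store, not an in-memory map.
public class FeatureToggleExample {

    // Stand-in for the team's real flag store.
    static final Map<String, Boolean> FLAGS =
            new ConcurrentHashMap<>(Map.of("use-new-ranking", true));

    static String search(String query) {
        if (FLAGS.getOrDefault("use-new-ranking", false)) {
            return "new-ranking results for " + query;   // risky new path
        }
        return "legacy results for " + query;            // known-good path
    }

    public static void main(String[] args) {
        System.out.println(search("on-call"));   // new path
        FLAGS.put("use-new-ranking", false);     // what flipping the toggle does
        System.out.println(search("on-call"));   // back to the known-good path
    }
}
```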

Good Runbooks

Having good runbooks reduces the cognitive load when dealing with a service outage. They can also save significant time on common problems simply by recording the steps that fixed them last time. In your regular on-call shift review meeting, it’s best to add new runbook entries covering the pages from that week.
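As an illustration only (the alarm name and steps here are made up), a runbook entry does not need to be fancy to be useful:

```
Alarm: HighQueueDepth on the ingest service

1. Check the ingest dashboard for a traffic spike.
2. If the fleet is CPU-bound, scale it up one step.
3. If the spike started right after a deploy, roll the deploy back.
4. Link the page and what you did in the shift-review notes.
```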

How to know your on-call team is in a good place?

  1. It is easy for people on the team to get someone to cover for them when they go on vacation.
  2. People on the team volunteer to be on-call for peak events.
  3. People don’t complain about being on-call during 1-on-1s.

January Links

Scientists Say You Can Cancel the Noise but Keep Your Window Open

The plan is to integrate these speakers into windows and walls and make them smaller.

Concept of ‘feature store’ for typed ML model inputs (tensors, vectors, etc)

https://www.logicalclocks.com/blog/feature-store-vs-data-warehouse

VM performance tests, very good blog series.

https://tratt.net/laurie/blog/entries/why_arent_more_users_more_happy_with_our_vms_part_1.html
https://tratt.net/laurie/blog/entries/why_arent_more_users_more_happy_with_our_vms_part_2.html

Compressing for pub/sub results in great savings.

https://blog.lawrencejones.dev/compress-everything/

The meta-programming problem with functional programming in software leviathans.

Few of the software leviathans are built in functional languages: Facebook uses PHP/Hack, Google uses Java and C++, Amazon uses Java, Netflix uses Java. The general consensus is that functional languages provide large benefits over object-oriented and procedural languages like Java; one particular claim is that a functional language like Haskell can do the same work in 1/10th the lines of code. If functional languages really are better, we would expect to see the big tech companies investing heavily in adopting them. We might even expect them to create a functional programming language just for their use case, but instead Google created Go, possibly the least functional programming language of the 21st century. What is going on here? Why aren’t functional programming languages being adopted in the biggest software systems on the planet?

People have argued that inertia explains the low adoption of functional programming languages in massive software projects, but I think the evidence points in the opposite direction. Google created an entirely new language that was intentionally less functional than Java. Facebook started on PHP and then extended that language into Hack; they could have used that energy to adopt Haskell instead.

My suspicion is that the real reason functional languages are not used in massive software leviathans is meta-programming. Meta-programming enables software developers to create custom domain-specific languages, literally adding new programming syntax and expressions to the code base. This is an incredible power and can make a lot of problems much easier. But meta-programming does not scale.

Consider a software project with 10,000 software engineers. At this scale the limiting factor is not our ability to write clean and concise code; the main issue is understanding the effects of changes to the code base. A change might take a month of research before you touch 500 lines of code. Skip that research and, more likely than not, you will start the project, realize two weeks in that your approach will never work, and have to start over.

Meta-programming falls into the category of programming constructs that are easier to write than they are to read. This is true for all code of course, but in large code bases reading Golang code is reliably easier than reading Lisp code.

As an algorithmic metaphor: Golang code complexity scales at O(n^2), while Lisp code scales at O(n^3).

Software Leviathans and the strain on the programming job market.

Why I’m not worried about H1B, outsourcing or remote work.

Software leviathans dominate the market due to diseconomies of scale. Leviathans are a bit of a self-fulfilling prophecy: you create a thing like Facebook and it starts to take off. Then you find a way to make money off of it. Then, thanks to near-zero marginal costs, you end up hiring 10,000 engineers to maximize the value of ads on Facebook.

Software that is valuable gets bigger over time, and due to diseconomies of scale it gets ever more expensive to maintain. Counteracting those diseconomies are the natural monopolies like Facebook, which solve the problem by pouring more money into it, hiring the absolute best programmers to hold the information problem back a little longer.

Leviathans drive demand for the best programmers, and importantly that demand is far above the supply of engineers at the peak of skill. This demand has so far created a bifurcation in the job market, with FAANG salaries and restricted stock units surging ahead of pay in the rest of the market. The bifurcation has persisted over the last decade despite FAANG opening offices in India and China, H1B visas, and the surge in new Computer Science majors entering the market.

Now, in 2020, the big shock is remote work. We just spent the last year working remotely, and lots of accountants are thinking to themselves, “Why are we paying San Francisco salaries when we could be paying less than half that anywhere else on the planet?”

We are moving towards a ‘remote first’ programming market, where anyone in the right timezones can fill any role at a top tier company. This should reduce compensation a bit, since the cost of real estate in a few cities has been a major driver of FAANG salaries. But it won’t change the fundamental problem, which is diseconomies of scale in software. FAANG and other big tech companies will still pay higher compensation than everyone else; the terms will just be a little different. Instead of making $500k in Seattle, senior engineers will make $200k anywhere they want to live.

You might think this is a bad thing for software engineers, since we will be getting paid less overall. But that misses two important factors, the first being land costs. Not everyone wants to live in San Francisco, San Jose, Seattle, or New York. I for one would never have moved to Seattle if I hadn’t been promised nearly double what I was making in Denver at the time.

The second factor is that remote work is not a software-engineer-only change. Most other white collar desk jobs can also be done remotely, which means they will also see a drop in compensation. Lawyers don’t really need to do anything in person; they certainly managed to keep working through the pandemic. Why hire an expensive law firm in Atlanta when you can get the same remote lawyer based in Montana for one fifth of the price?

Remote work reduces the locality of labor, which will cause labor prices to globalize. Programmers’ salaries will become more consistent across the globe. At the same time, diseconomies of scale and the sheer demand for software will act to keep programming demand high.

But other industries that do not have the same level of demand as software will also see their compensation globalize. This will most easily be seen in a reduction in the price of white collar services globally. 

The future looks extremely deflationary to me. The prices of white collar labor will drop due to remote work and the price of manual labor will drop due to automation. The people who come out on top will be white collar workers who live in low cost countries and the owners of capital. 

Software Leviathans and the weird dominance of good enough.

One day in Spring 1989, I was sitting out on the Lucid porch with some of the hackers, and someone asked me why I thought people believed C and Unix were better than Lisp. I jokingly answered, “because, well, worse is better.” We laughed over it for a while as I tried to make up an argument for why something clearly lousy could be good.

https://www.dreamsongs.com/WorseIsBetter.html

People have long wondered why Java took the crown as the ‘enterprise’ language. I can’t really weigh in on that, since I came onto the scene long after Java was all there was. This article is about why software leviathans are written in Java more than anything else.

You have a huge software project to build. What language do you build it in? The prototype was written in Ruby on Rails by one guy and an Adderall prescription. Now they want you to scale this thing to 1,000+ engineers over 5 years of development. You might think “aha, this is my chance, let’s save an order of magnitude in lines of code and use Lisp”, except this story happened in the past and they chose Java.

Why is it always Java? Sure, it’s reasonably fast, but Facebook made PHP work, so can’t we at least use Haskell? Since we have the benefit of hindsight, we know that most of the biggest software systems are built in Java. Google built so many leviathans in Java that they bankrolled a new language like Java but with fewer features. Amazon is based on Java. Netflix is Java again. Facebook made their own language, and Microsoft is old enough to have existed before Java but still made their own version of Java, C#.

The real question should be, “What is Java’s secret?”. 

Java requires a lot of boilerplate

Java just plays well with the major constraints of a software leviathan, and at leviathan scale that is all that matters.

This one is the corollary of “Java doesn’t support meta-programming”. Creating your own DSL is great; 1,000 engineers creating their own DSLs is 999 nightmares. Software leviathans are too big for any one engineering team to understand, and any DSL you create makes your code unintelligible to the rest of the people working in hell with you. I can understand boilerplate written by a monkey, but a DSL written by another software engineer could take me days to understand. When your team gets poached to go work on a startup where the code base isn’t humongous, it’s a lot easier to bring in Java programmers to replace you lot than it would be to get Haskell engineers to figure out your undocumented dialect.

Google got to the point where they figured Java had too much meta-programming ability, so they created Go, which is basically Java without inheritance. That is what happens when you work on a leviathan project: you begin to resent your peers’ ability to do anything unusual, because you know it’s just going to be more work for you.

Adding more onboarding time to understand 1) the functional language and 2) the DSL your team created might push our already long 6-month onboarding period closer to the 1-year mark. I wrote an article about onboarding time and functional languages aimed at startups, but honestly I don’t think the hiring market is the real reason Java dominates the top end. FAANG is already willing to train new grads to work on their giant software projects.

Honestly, it boils down to comprehension. Humans can only comprehend so many things, and at leviathan scale the maximum is a tiny fraction of the entire system.

In a software leviathan your team constantly works with other teams’ systems. How does this API work? There isn’t any documentation, and one 30-minute office-hours session isn’t going to explain that hairball. If you all use the same language, and that language is Java, there is a chance you can open up their code base and figure out what is going on. They probably didn’t do anything you wouldn’t expect, like pre-allocating all of their memory and storing every object in a ring buffer. But even if they did do something crazy, you can probably figure it out. Besides, Java doesn’t have anything like Scalaz, so you won’t be surprised by a functor where you weren’t expecting it.

Let’s take the opposite side: away-team work. You have been given the glorious task of implementing a new feature, but it’s impossible to do it cleanly without an API change in another team’s system. That team fully supports the change and has contributed two paragraphs to your architecture document describing the change to make in their system. But the change isn’t on their roadmap, so you are going to have to do it.

Getting their service to run and pass integration tests on your virtual development machine takes a week. Now you need to navigate their system, where they have conveniently used dependency injection to ensure that you can’t know which of the 5 implementations of an interface is in play. Do you still wish the other team could use Clojure? You might never figure out the DSL.
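Here is a minimal sketch of that situation (every name below is invented). The call site only ever sees the interface; the wiring that decides which implementation actually runs lives in a DI module far away from the code you are reading.

```java
// Hypothetical sketch of the "which implementation is in play?" problem.
interface PaymentGateway {
    String charge(int cents);
}

class LegacyGateway implements PaymentGateway {
    public String charge(int cents) { return "legacy:" + cents; }
}

class ShadowGateway implements PaymentGateway {
    public String charge(int cents) { return "shadow:" + cents; }
}

class RetryingGateway implements PaymentGateway {
    private final PaymentGateway inner;
    RetryingGateway(PaymentGateway inner) { this.inner = inner; }
    public String charge(int cents) { return "retrying(" + inner.charge(cents) + ")"; }
}

class CheckoutService {
    private final PaymentGateway gateway;   // injected; could be any of the above
    CheckoutService(PaymentGateway gateway) { this.gateway = gateway; }
    String checkout(int cents) { return gateway.charge(cents); }
}

public class AwayTeamExample {
    public static void main(String[] args) {
        // In a real code base this wiring lives in a DI module far from the
        // call site, which is exactly why the away-team engineer is lost.
        CheckoutService service = new CheckoutService(
                new RetryingGateway(new LegacyGateway()));
        System.out.println(service.checkout(499));
    }
}
```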

Have you ever looked at somebody else’s Lisp code and wondered what was inside the variables? Now imagine this is your job, and you will spend the next month making a 200-line change to a 100,000-line API service you didn’t know existed until this week. Except this will happen every quarter for the rest of your career.

People complain about how Java forces you to write out types everywhere, but for software leviathans this is a benefit. I can see helpful type signatures everywhere, whether I’m reading your code in my IDE, an email, an excerpt in an arch doc, or a Slack message you sent me at 3 am.
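As an illustration (this snippet is mine, not from any real code base): even pasted into an email or a Slack message, the explicit signature below tells you what goes in and what comes out, no IDE required.

```java
import java.util.List;
import java.util.Map;

public class RankingExample {

    record Listing(String id, double price) {}

    // No IDE required: a map of region -> listings goes in,
    // a list of listing ids under the price cap comes out.
    static List<String> affordableListings(Map<String, List<Listing>> byRegion,
                                           String region,
                                           double maxPrice) {
        return byRegion.getOrDefault(region, List.of()).stream()
                .filter(l -> l.price() <= maxPrice)
                .map(Listing::id)
                .toList();
    }

    public static void main(String[] args) {
        Map<String, List<Listing>> byRegion = Map.of(
                "seattle", List.of(new Listing("a1", 450.0), new Listing("b2", 900.0)));
        System.out.println(affordableListings(byRegion, "seattle", 500.0)); // [a1]
    }
}
```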

Java and Go are great in software leviathans. You don’t have to worry about stumbling upon a programming mystery created 10 years ago by a disgruntled new grad. You can expect consistent syntax and language in whichever microservice you are working on. The code has self-documenting types that are ‘easy’ to understand. Honestly, these are a lot of small benefits which make a tough coding environment a little more manageable.