Software Leviathans’ strain on the programming job market.

Why I’m not worried about H1B, outsourcing or remote work.

Software leviathans dominate the market due to diseconomies of scale. Leviathans are a bit of a self-fulfilling prophecy. You create a thing like Facebook and it starts to take off. Then you find a way to make money off of it. Then, because each additional engineer still generates more marginal ad revenue than they cost, you end up hiring 10,000 engineers to maximize the value of ads on Facebook.

Software that is valuable gets bigger over time, and due to diseconomies of scale it gets ever more expensive to maintain. Counteracting these diseconomies are the natural monopolies like Facebook, which solve the problem by pouring more money into it: hiring the absolute best programmers to hold the information problem at bay a little longer.

Leviathans drive demand for the best programmers, and importantly that demand is far above the number of engineers at the peak of the skill curve. This demand has so far created a bifurcation in the job market, with FAANG salaries and restricted stock units surging ahead of pay in the rest of the market. The bifurcation has persisted over the last decade despite FAANG companies opening offices in India and China, H1B visas, and the surge of new Computer Science majors entering the market.

Now in 2020 the big shock is remote work. We just spent the last year working remotely. Lots of accountants are thinking to themselves, “Why are we paying San Francisco salaries when we could be paying less than half that anywhere else on the planet?”

We are moving towards a ‘remote first’ programming market, where anyone in a compatible timezone can fill any role at a top-tier company. This should reduce compensation a bit, since the cost of real estate in a few cities has been a major driver of FAANG salaries. But it won’t change the fundamental problem, which is diseconomies of scale in software. FAANG and other big tech companies will still pay higher compensation than everyone else. The terms will just be a little different: instead of making $500k in Seattle, senior engineers will make $200k anywhere they want to live.

You might think this is a bad thing for software engineers since we will be getting paid less overall. But that misses two important factors, the first being land costs. Not everyone wants to live in San Francisco, San Jose, Seattle, or New York. I for one would never have moved to Seattle if I hadn’t been promised nearly double what I was making in Denver at the time.

The second factor is that remote work is not going to be a software-engineer-only change. Most other white collar desk jobs can also be done remotely, which means they will also see a drop in compensation. Lawyers don’t really need to do anything in person; they certainly managed to keep working through the pandemic. Why hire an expensive law firm in Atlanta when you can get the same remote lawyer based in Montana for one fifth of the price?

Remote work reduces the locality of labor. This will result in labor prices globalizing, and programmers’ salaries will become more consistent across the globe. At the same time, diseconomies of scale and the sheer demand for software will act to keep demand for programmers high.

But other industries that do not have the same level of demand as software will also see their compensation globalize. This will most easily be seen in a reduction in the price of white collar services globally. 

The future looks extremely deflationary to me. The prices of white collar labor will drop due to remote work and the price of manual labor will drop due to automation. The people who come out on top will be white collar workers who live in low cost countries and the owners of capital. 

Software Leviathans

Diseconomies of scale, why FAANG pays high salaries, the dominance of Java

The top end of software engineering jobs is dominated by what I’ve started thinking of as ‘Software Leviathans’: large software systems that are staffed by thousands of engineers. A few that come to mind are Amazon Alexa, Amazon.com, Google Search, Salesforce, and Facebook.com. These are not ‘monoliths’, single large services that do everything. Instead they are the result of combining hundreds of smaller ‘microservices’ into one massive software product.

These leviathans do many, many things; few people on the planet can claim to know all of the features of facebook.com. It is quite possible that no single list exists that enumerates every feature in the product.

Similarly, development on these systems happens in parallel across many teams. It is essentially impossible for any one person to keep track of everything that is being added to the system.

Leviathans are too big for anyone to understand. It doesn’t matter what architecture or runtime choices are made: it could be one massive JVM, a million lambda functions, a hundred thousand Docker containers, or thousands of microservices. Even if you work on the leviathan, you won’t have any real understanding of the total state of the system. Each engineer will be aware of, and communicate with, a tiny fraction of the total number of people working inside the leviathan.

Leviathans are heterogeneous systems. They do not do ‘one thing well’; leviathans do everything you can think of. Google.com is a search engine, but it’s also a calculator, an advertising system, a web scraper, a hotel booking tool, a flight booking tool, and much more. Leviathans grow in parallel, across myriad tentacles of functionality. New features emerge all the time, usually to the surprise of other engineers on the project.

Leviathans are difficult to work in. Despite appearing from the outside to be a sea of constant change, any change made inside the leviathan is extremely expensive in engineering hours. There are thousands of potential interactions each engineering team has to consider when evaluating changes to their system. The architecture must be constrained heavily to support parallel development in an environment where coordination between different teams is impossible due to scale. Engineers working on a software leviathan spend a relatively small fraction of their time actually writing code, compared to debugging issues, doing research, coordinating changes, and documenting.

Leviathans are interesting because they are the ‘core’ services powering the digital world these days. Their scale is at the top of the chart in the software engineering world, and as a result they expose the limitations of software engineering.

Software diseconomies of scale are at their most evident in these software leviathans. They are massive projects staffed by huge numbers of the best engineers, yet per-engineer development is slow and code quality is not clearly superior to what the rest of the industry achieves.

Don’t use internal tooling, contribute to your tools.

There is a dichotomy in software engineering organizations: some only use public tooling and entirely avoid building their own tools, while others follow the ‘not invented here’ principle and try to use only software developed internally.

There are two forces driving this split. First, building software tooling is expensive; small companies often cannot afford to build and support internal tools. It is an expensive recurring cost that can easily get out of hand: building two internal tools a year will set you on a path to supporting twenty tools ten years from now, eating huge chunks of your operations budget.

The other force in play is that if you use public or off-the-shelf tooling you will encounter workflow discontinuities that are difficult to fix. Using off-the-shelf tool A with tool B might require an entire employee to bridge the gaps manually, while still being a pain for everyone involved. Decision makers at some companies think to themselves, “We can just build a tool B that works perfectly with our use case.” That works great when you have one tool, but then when need C comes along your team makes the same case again: “We already invested in custom tool B, we can’t throw away that work, so we need a new custom tool for need C.” And now your company is on the path towards building an alphabet’s worth of internal tools that aren’t useful outside of your business.

Luckily, there is a solution to this: use open source tooling, and when you run into a workflow problem, work with the maintainers to contribute a fix. Even in poorly managed projects, extending the code to support your use case will be less costly than building an entirely new internal tool to solve the same problem.

Software is a depreciating Good

“If customers are paying for the work, presumably by the hour, then you can’t expect them to pay for weeks (or more) of work every few years just to arrive back at the same product they started with.”
– From reddit user /u/PragmaticFinance

People have the idea that because the software has the same features as it did five years ago, it is still just as ‘good’ as it was five years ago, and that if we can keep running it without modifications, that is a reasonable practice. Unfortunately, while it feels intuitive, that is not the reality with software.

Sorry, but a software product built on out-of-date tooling is strictly inferior to software built on up-to-date tooling. It is not ‘just as good’; it is significantly worse. Out-of-date software has increased vulnerability to security defects. Python 2.7 doesn’t get security patches anymore; if your software is built on it and a zero day is discovered, your software could be inoperable for weeks while engineers upgrade it to run on Python 3.
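
To make that upgrade cost concrete, here is a minimal, hypothetical sketch (the orders dictionary is made up, but the incompatibilities are real differences between Python 2 and 3):

    # Hypothetical Python 2.7 code: runs fine today, breaks the moment you move to Python 3.
    orders = {"widgets": 3, "gadgets": 5}
    for name, count in orders.iteritems():    # dict.iteritems() was removed in Python 3
        print "%s: %d units" % (name, count)  # the print statement is a SyntaxError in Python 3

    # The Python 3 equivalent. Trivial here, but every file in the codebase needs this audit,
    # and that is where the weeks of engineering time go.
    for name, count in orders.items():
        print("{}: {} units".format(name, count))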

Software that can’t be patched to address security vulnerabilities in a reasonable time frame is strictly worse than up-to-date software. Anyone evaluating a purchase of these two programs would factor in the expense of making changes.

Aside from security issues, out-of-date software is harder to modify. What are the banks going to do when the last Cobol programmer dies? They will have to fund Cobol bootcamps, which is not going to be cheap.

What are companies whose ‘feature complete’ software hadn’t needed to change in years doing now that GDPR is a thing? That software is getting dusted off and updated, or entirely replaced.

Roads and tractors require constant maintenance just to keep doing the same job. Software is the same. Maintenance costs should be estimated and included in the total cost of ownership for software. 

A lot of business software just encodes business processes. It can get out of date because the world shifted and the process changed, or the world shifted and the way the process is encoded is out of date.

Each time I write one of these technical debt posts it helps me understand why Software as a Service took over in such a big way. If you needed this year’s model of tractor every year, you would lease it.

2020: Age of the remote conference

At this point, in-person conferences with thousands of attendees are done for the year. Lockdowns are easing in general, but most people won’t be comfortable going to a massive conference with thousands of people from all over the world anytime soon.

We are likely to be disrupted by this pandemic until spring 2021, by which point travel will hopefully be back to normal. I’ve been working from home for months now and will be doing so officially until October.

The thing is, conferences bring a lot of value to engineering. They are a great way to keep up with what people are doing in industry and to share your experience. I’ve attended some great conferences and enjoyed diving into the technical depth available at a conference devoted to Spark or Kubernetes.

For 2020, these kinds of tech talks and presentations have to move online to YouTube and Zoom, just like our work has. Fortunately, there are some benefits to running a tech conference remotely.

A remote conference can be a lot more affordable. You don’t need to rent a big conference venue to host the talks, and attendees save money by not flying to a different city or renting hotel rooms. 

You also save time because you aren’t traveling. Just as you don’t need to commute to the office, you don’t need to travel to the conference. The conference is wherever you are.

The technology is also good for presentations and Q&A sessions. Teleconferencing shines in situations where you want to share screens and only a few people need to speak at once. Where it really breaks down is group discussion; it is much harder to interleave speakers. But if we focus on activities with a single presenter, or a Q&A where someone asks a question and then stops talking, teleconferencing is almost as good as being in person.

I think in 2020 we will see a lot of remote mini-conferences. They are really economical to host, and the timing couldn’t be better. I like the idea so much that I have decided to turn the TinyConf I was planning for this year into a remote conference.