KIS (Keep it Stupid) as a Design Philosophy 

On my last project as an architect, I found myself drifting toward a strategy of keeping everything as simple as possible. When choosing a database, I would look at our projected usage, multiply it by ten, and evaluate which database would be simplest given those requirements. When weighing an external library, I would ask: given its size, is it better to import it as a black box or to rewrite the functionality as a single-file module? The project being in Go made some of these choices easier, because Go does not offer many higher-level features. Instead of worrying about class hierarchies, you just write the functionality you need. 

Keeping it stupid means choosing an architecture that anyone would understand at first glance. Other engineers reading it should feel that they could have come up with it themselves. There shouldn’t be any surprises or ‘cool’ technology choices. There is a blog post out there arguing that a startup or software team can afford to pick at most one risky technology before going bust. In KIS architecture, you never pick a risky technology. 

The 2018 software engineer’s toolbox holds dozens of technologies that are both proven and incredibly powerful. Many of the common tools are available as managed services in the cloud, making things even easier. Save yourself the trouble: keep your architecture stupid until it’s proven that it needs to be more complicated.

It took me over a year to find my next job

I am moving to a new job in a different state. While my job search ended up being much longer than I expected, I learned a lot about interviewing and about the salary ranges in Denver. I ended up with 2 competing offers at the same time and got a look at the maximum amount each company was willing to pay me. 

I started looking for a new job after the company I work for was acquired in 2017. I did this mainly in waves of ~5 applications with around a 50% response rate. Some companies got back to me right away; others took 3 months before starting the interview process. I did around 3 serious waves of applications and in total applied to 20 different companies. Considering that I applied to over 50 jobs before getting each of my internships, this was a pretty good rate. Still, it took over a year and thousands of dollars worth of vacation time. On the positive side, my income will increase into the six-figure range.

Part of why it was easier this time is that I have 3+ years of professional experience, so I am no longer competing with new grads or junior engineers for jobs. Additionally, my last promotion put me into a line management position where I was overseeing the work of others and leading the project, which made behavioral questions much easier. The biggest difference was the response rate: as a college student it was 10% or less, now it is over 50%. 

Because my response rate was so much higher during this job search, I ended up doing a lot more actual interviews than when I was looking for internships. While time consuming, interviewing is probably the best way to practice interviewing. I have done enough interviews now that they start to blur together; the questions are all pretty similar. Whiteboarding has always been pretty natural for me because of my college background, but those questions repeat too. The last in-person interview I did was 2 rounds of whiteboarding plus a couple of non-technical rounds. The questions were easy and I was almost bored, but luckily the people I was interviewing with were fun and it was a blast. 

It was also interesting to get a feel for the company tiers and for the technical level of my local area. Most of the companies here in Denver ask technical questions of around the same difficulty, the hardest of which are Leetcode easy questions. I have only run into Leetcode medium questions at companies like Google or Uber. 

I have done around 20 Leetcode questions in total, which could be accomplished in a couple of weeks. Interviewers want to see how you solve problems, not whether you already knew the solution. If a company is known for giving hard-level problems, you want to be totally solid on medium-level problems and have done a bunch of hard ones. But grinding hard problems for months is not necessarily a good use of your time, unless you absolutely must get into Jane Street. And I think focusing on just one company is probably bad for your career and your mental health. 

While my job search took a long time and cost me some money, the increase to my income is more than enough to make up for it. I also got a chance to take advantage of Josh Doody’s Fearless Salary Negotiation. Paying $50 to get an extra $5000 each time you switch jobs is worth it.

The rule of switching jobs every 3 years early on in your career seems to have held true in my case. Even if I had decided not to relocate, my income would have received a solid boost.


Specialized tools can be 10x as good for the job

Often, to save money or for convenience, people will buy a multitool or generalized tool instead of a specialized one. Depending on the use case, there may be a specialized tool that is only useful for that one thing. One example is the fingertip bandage: its butterfly shape makes it less useful for most scrapes, but if you have an injury on your fingertip, it’s hard to beat. They stay on very nicely and fit well. 

This concept extends to specialist engineers. If your project can support it, you want to hire specialists who focus precisely on subsets of your project: a frontend engineer, a backend engineer, and a database expert. The issue is that you often cannot find specialists for every component, or there are many areas too small to justify hiring a single specialist. This is when most companies turn to generalists. 

Specialists are expensive and only want to work on one sort of problem. Generalists are cheaper and will work on anything you want them to. As a result, many companies slowly become staffed by generalists, except in the few areas where the company itself is specialized. After a while, your company may consist almost entirely of generalists who are used to working on things they do not know a lot about. The issue is that this affects your company culture and your company’s ideas about how software engineering works. 

A specialist will be working at 100% on day one of a project in their specialty. A generalist replacing a specialist you didn’t want to hire will take months to reach 80% of the productivity that specialist would have had on day one. If your company consists mainly of generalists, as most companies do, you may not realize how much efficiency you are giving up by not having specialists. 

SledgeCast: Merging K Sorted Lists

In our latest set of SledgeCasts I work through the problem of merging k sorted lists. I eventually converged on the optimal solution, except that I did not perform the merge in place. Merging linked lists can be done in place, without any allocation, which makes a big difference compared to allocating an entirely new list on each merge.


Jenkins is showing its age

Jenkins has been a stalwart aid in all of the CI/CD projects I have worked on. Its ease of use and plethora of plugins make it useful for almost any project and situation. From build tool to job scheduler, Jenkins can solve your problem. But as with any good tool, Jenkins has its drawbacks. The issue that gets me is that configuration is not stored in a git-compatible format. If I change a job configuration, there is no way to roll back the change or even determine what the previous state was. If multiple teams have access to Jenkins, your build server could go down with no means of rolling back a change made on the whim of another team’s wannabe system administrator. 

Jenkins Pipelines are the Jenkins answer to storing job configuration in source control, but they only cover job configuration. Plugin configuration is still updated manually and easy to lose. It does make me wonder whether we could make the /var/jenkins folder a git repository and just ignore the job history directories. 
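That idea could start with a .gitignore at the root of the Jenkins home. This is just a sketch; the paths are assumptions based on a default Jenkins layout and would need checking against a real install:

```
# Hypothetical .gitignore for a git-tracked Jenkins home.
# Keep config.xml files; exclude history, workspaces, and secrets.
jobs/*/builds/      # per-build history and archived artifacts
jobs/*/workspace*/  # checked-out source trees
workspace/          # shared workspace directory on newer layouts
secrets/            # keys and credentials do not belong in git
*.log
```

With something like this in place, a `git diff` after any UI change would at least show what moved, even if plugin state still lives outside version control.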

While Jenkins Pipelines are nice, I don’t want to code my entire build. I just want my changes stored in a git-compatible format so I can easily roll them back. 

I also want to be able to mix and match pipeline stages against a standard stage interface for artifacts. Jenkins Pipelines let me mix and match ‘functions’ from a shared library, but not stages. Spinnaker purports to handle pipelines well, but it requires me to run 10 different microservices just to get started. That is not a replacement for Jenkins. If Spinnaker were hosted by a third party it might be convenient enough, but Jenkins is free to set up. 

I will be using Jenkins today, but tomorrow I will have to start looking at alternatives.