LLMs are at the point where they can reduce toil for software engineers across a host of use cases. Here are a few I’ve thought of.
Reducing time spent on toil
- Resize this button -> AI
- Refactor this function -> AI
- Connect these endpoints -> AI
- Write more tests -> AI
- Add more comments -> AI
- Create PlantUML diagrams from a sketch -> AI
- First-pass code reviews -> AI (see the sketch after this list)
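To make the first-pass review idea concrete, here is a minimal sketch that pipes a git diff to Claude through the Anthropic Python SDK. The model name, prompt wording, and the `first_pass_review` helper are my own illustrative assumptions, not a fixed recipe.

```python
# Minimal sketch of a first-pass AI code review.
# Assumes the `anthropic` SDK is installed and ANTHROPIC_API_KEY is set;
# the model name and prompt wording are illustrative assumptions.
import subprocess

import anthropic


def first_pass_review(base_branch: str = "main") -> str:
    # Grab the diff of the working tree against the base branch.
    diff = subprocess.run(
        ["git", "diff", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": (
                "Give a first-pass code review of this diff. "
                "Flag likely bugs, missing tests, and unclear names.\n\n" + diff
            ),
        }],
    )
    return response.content[0].text


if __name__ == "__main__":
    print(first_pass_review())
```

This doesn’t replace a human reviewer; it just catches the obvious issues before a teammate spends time on the change.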
Reducing time spent researching small things
- AI is a better Stack Overflow
- Examine stack traces
- Easier to write architecture documents
- Faster development of small utilities
What AI still cannot do
- Test if a library will work for your use case
- Respond to outages
- Decide the product direction
- Argue with stakeholders
- Yell at people who want to do stupid things
A few opportunities to reduce operational toil
- AI can review monitoring graphs and spot unexpected changes
- AI can check if a website is down
- AI can file bug reports (see the sketch after this list)
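Here is a minimal sketch of the website check plus bug filing, assuming the `requests` and `anthropic` packages; `file_bug` is a hypothetical stand-in for whatever issue tracker you use.

```python
# Minimal sketch: check a site and, if it looks down, have Claude draft the bug report.
# Assumes `requests` and `anthropic` are installed and ANTHROPIC_API_KEY is set;
# file_bug() is a hypothetical placeholder for your issue tracker's API.
import requests

import anthropic


def file_bug(title: str, body: str) -> None:
    # Placeholder: wire this up to your real tracker (GitHub Issues, Jira, ...).
    print(f"[BUG] {title}\n{body}")


def check_site(url: str) -> None:
    try:
        resp = requests.get(url, timeout=10)
        if resp.ok:
            return  # site looks healthy
        symptom = f"{url} returned HTTP {resp.status_code}"
    except requests.RequestException as exc:
        symptom = f"{url} is unreachable: {exc}"

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    draft = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model alias
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": f"Draft a concise bug report for this outage symptom: {symptom}",
        }],
    )
    file_bug(title=f"Site check failed: {url}", body=draft.content[0].text)


if __name__ == "__main__":
    check_site("https://example.com")
```

Run something like this on a schedule and a human still decides how to respond; the AI just saves the typing.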
For me, the most valuable use of Claude as a coding assistant has been how much easier it makes getting started. Usually, to program a Swift game I’d have to spend a couple of hours getting up to speed with Swift development and building up my program from examples. Claude was able to create a basic version of the game I wanted from a detailed prompt. It didn’t manage to create the game in one shot; I had to edit the code a bit to get it to run. But it turned a project that would have taken me a few days into one that took a few hours.