Code Without Learning to Code

I’m working on a new book on Vibe Coding for people who don’t know how to code. Using LLMs to build simple programs unlocks a lot of programming ability for people who either tried and failed to learn to program or never got started. You no longer need to learn the basics of programming logic or syntax to build useful programs to solve your problems. 

The current target of the book is people who don’t know how to code but are willing to learn to run and test computer programs they create through Vibe Coding. 

As part of this process I’m doing some research comparing free and paid LLMs for programming use. 

For each model I pasted in the same prompt and took the first result. I saved the code into a folder that already had pygame installed via pip and ran it directly.

“Please create a game for me using python and pygame. In the game the player should navigate a 2d space using the arrow keys. In this game there should be a maze like region with rocks and stalagmites. Inside the region should be chests which contain gold. The player should be able to navigate the maze and collect gold from the chests.”

Anthropic Claude Haiku 3.5 (free)

https://github.com/Sevii/vibecoding/blob/master/MakingGames/BlogPost_LLM_Comparison/haiku35_chest_game.py

Anthropic Claude Sonnet 3.7 (paid)

https://github.com/Sevii/vibecoding/blob/master/MakingGames/BlogPost_LLM_Comparison/claude37_sonnet_chest_game.py

Gemini 2.0 Flash (free)

https://github.com/Sevii/vibecoding/blob/master/MakingGames/BlogPost_LLM_Comparison/gemini_2_flash_chestgame.py

Gemini 2.5 Pro Experimental (paid)

https://github.com/Sevii/vibecoding/blob/master/MakingGames/BlogPost_LLM_Comparison/gemini_25_pro_experimental_chest_game.py

ChatGPT (free) 

https://github.com/Sevii/vibecoding/blob/master/MakingGames/BlogPost_LLM_Comparison/free_chatgpt_chest_game.py

It’s interesting to see how the paid models differ from the free ones, but both tiers are giving us working code on the first pass. 
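
To give a sense of the shape of program that prompt produces, here’s a heavily stripped-down sketch of the same idea. It’s written by hand for this post rather than taken from any of the model outputs above, so the layout, colors, and object positions are purely illustrative; it only assumes pygame is installed.

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
pygame.display.set_caption("Chest Collector (sketch)")
clock = pygame.time.Clock()
font = pygame.font.SysFont(None, 28)

# Player, obstacle, and chest rectangles; positions are arbitrary for the sketch.
player = pygame.Rect(32, 32, 24, 24)
rocks = [pygame.Rect(x, y, 32, 32) for x, y in [(128, 96), (256, 160), (384, 288), (192, 352)]]
chests = [pygame.Rect(x, y, 24, 24) for x, y in [(560, 400), (300, 60), (80, 400)]]
gold = 0
speed = 4

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False

    # Arrow-key movement, cancelled if it would walk into a rock or off screen.
    keys = pygame.key.get_pressed()
    dx = (keys[pygame.K_RIGHT] - keys[pygame.K_LEFT]) * speed
    dy = (keys[pygame.K_DOWN] - keys[pygame.K_UP]) * speed
    moved = player.move(dx, dy)
    if moved.collidelist(rocks) == -1 and screen.get_rect().contains(moved):
        player = moved

    # Collect gold from any chest the player touches.
    for chest in chests[:]:
        if player.colliderect(chest):
            chests.remove(chest)
            gold += 10

    screen.fill((20, 20, 30))
    for rock in rocks:
        pygame.draw.rect(screen, (110, 110, 110), rock)
    for chest in chests:
        pygame.draw.rect(screen, (200, 170, 40), chest)
    pygame.draw.rect(screen, (80, 200, 120), player)
    screen.blit(font.render(f"Gold: {gold}", True, (255, 255, 255)), (10, 10))
    pygame.display.flip()
    clock.tick(60)

pygame.quit()

Save it as chest_game.py in a folder where pygame is installed and run python chest_game.py, the same way the model outputs above were run.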

Reducing toil with AI

LLMs are at the point where they can reduce toil for software engineers across a host of use cases. Here we will explore a few I’ve thought of.

Reduces time spent on toil 

  • Resize this button -> AI
  • Refactor this function -> AI 
  • Connect these endpoints -> AI 
  • Write more tests -> AI 
  • Add more comments -> AI 
  • Create PlantUML diagrams from a sketch -> AI 
  • First pass code reviews -> AI

Reduces time spent researching small things

  • AI is a better Stack Overflow
  • Examine stack traces 
  • Easier to write architecture documents 
  • Faster development of small utilities

What AI still cannot do

  • Test if a library will work for your use case 
  • Respond to outages 
  • Decide the product direction
  • Argue with stakeholders 
  • Yell at people who want to do stupid things

A few opportunities to reduce operational toil

  • AI can review graphs and notice changes
  • AI can check if a website is down (see the sketch after this list)
  • File bug reports 
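
The website check, for example, is easy to script and hand off. Here’s a minimal sketch using only Python’s standard library; the URL and the reporting step are placeholders, since how you file the bug or summarize the outage is up to you (or your AI of choice).

# Hypothetical sketch: check whether a site responds, then hand the result
# to whatever reporting step you like (an LLM summary, a ticket, an alert).
from urllib.error import HTTPError
from urllib.request import urlopen

def site_is_up(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with anything other than a server error."""
    try:
        with urlopen(url, timeout=timeout) as response:
            return response.status < 500
    except HTTPError as err:
        return err.code < 500   # got a response, just an error status
    except OSError:
        return False            # no response at all (DNS failure, timeout, refused)

if __name__ == "__main__":
    url = "https://example.com"  # placeholder URL
    if site_is_up(url):
        print(f"{url} looks up")
    else:
        # Placeholder: here is where you'd file a bug report or ask an LLM
        # to summarize recent logs and graphs for whoever is on call.
        print(f"{url} looks down")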

For me the most valuable use of Claude as a coding assistant has been how much easier it makes getting started. Usually, to program a Swift game I’d have to spend a couple of hours breaking into Swift development and building up my program from examples. Claude was able to create a basic version of the game I wanted from a detailed prompt. It didn’t manage to create the game in one shot; I had to edit the code a bit to get it to run. But it turned a project that would have taken me a few days into one that took a few hours.

Links October 2024

Great article on Mac setup

Lots of tools are outdated by default. I got to take advantage of this blog post since I got a new laptop this summer.

https://matt.sh/setup-2021-late

Operating Systems use out of date assumptions

Interesting talk on how modern systems-on-a-chip differ from how we imagine computers actually work. Over the last two decades, hardware has gradually abstracted many features away from the operating system. The computer board is now a collection of sub-computers that cooperate to mimic the operation of a traditional computer. 

The system described here is reminiscent of a microservice environment where you need to communicate over many different protocols to get the job done. I wonder if we can explore migrating ‘cloud’ microservice techniques down to the CPU level. 

Passive Radar for finding meteors

You can detect meteors using a set of clock-synchronized radios. The way it works is you monitor a reference frequency, perhaps around 180 MHz, and detect changes in it as meteors burn up in the atmosphere (a toy sketch of the detection step follows the links below). 

https://en.wikipedia.org/wiki/Passive_radar
https://britastro.org/wp-content/uploads/2019/11/Detection_of_meteors_by_RADAR.pdf
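
As a toy illustration of just that detection step (a real passive radar setup needs synchronized receivers and proper signal processing, as the links above describe), here’s a hedged sketch that flags power spikes on the monitored frequency. The threshold, window, and data are all made up for the example.

# Toy illustration only: flag samples where the received power on the monitored
# frequency jumps well above a running baseline, the way a brief meteor echo would.

def find_echoes(power_db, window=50, threshold_db=10.0):
    """Return indices where power exceeds the average of the previous window by threshold_db."""
    echoes = []
    for i in range(window, len(power_db)):
        baseline = sum(power_db[i - window:i]) / window
        if power_db[i] - baseline > threshold_db:
            echoes.append(i)
    return echoes

if __name__ == "__main__":
    # Fake data: quiet background around -90 dB with one short simulated echo.
    samples = [-90.0] * 200
    for i in range(120, 126):
        samples[i] = -70.0
    print(find_echoes(samples))  # prints the indices of the simulated echo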

A clear sign you are overdoing microservices 

Fine-grained services

Microservices have been the thing for over 15 years. They are great in large companies with CI/CD environments. But as your situation drifts farther from the ideal microservice use case, traps abound. 

Building a new service for one endpoint 

If you find yourself in a conversation where you need to create a new endpoint somewhere, but adding it to any of your existing services would break the concept of that microservice and turn it instantly into a ball of mud with no clear purpose, you have fallen into this trap. Microservice does not mean each service has only one HTTP endpoint. That use case is better served by cloud functions like AWS Lambda. 
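
To make the contrast concrete, here’s a minimal sketch of what that lone endpoint looks like as a Lambda function rather than a whole service. It assumes an API Gateway proxy integration; the handler body and the request fields are illustrative.

import json

def lambda_handler(event, context):
    """Handle a single HTTP endpoint without standing up a whole microservice."""
    # With an API Gateway proxy integration the request body arrives as a JSON string.
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")  # illustrative request field
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

You get one deployable unit per endpoint without pretending it’s a service with a domain of its own.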

The problem here is that we have gone too far in splitting up the monolith. Splitting a monolith with 100 HTTP endpoints into a dozen or so services with eight endpoints each is great. Splitting a monolith with 100 endpoints into 100 services is counterproductive. Instead of having an actual purpose, the single-endpoint microservice becomes the xyz-endpoint microservice. 

Endpoints are things that microservices empower. An endpoint in and of itself should never justify the creation of a microservice. 

Up in the air

We are in a phase where planning becomes quite difficult. ChatGPT has started a capitalistic AI war. Microsoft swept in to shepherd commercialization. Google is on the back foot for now. Amazon will launch something, I have no doubt. ChatGPT-style tech would make Alexa viable by solving the fractal conversation problem. 

The players are moving, and immense amounts of capital have been unleashed. But for those of us on the outside it’s a difficult time. You can’t really plan for the future, because the technology is advancing rapidly and is already transforming jobs across industries.

GPT-4 has been in the news, but Midjourney has quietly advanced to the point where it is transforming job tasks in the graphic design industry. I read a complaint by a graphic designer this weekend describing how his job has become more prompt engineering than graphic design. Instead of needing to draw things, he and his peers can now generate images with AI and then clean them up in Photoshop. 

Video created by demonflyingfox using Midjourney V4.

In 2022 I ordered physical versions of two AI-generated images that I thought were incredible examples of what AI could do. In 2023 those images already feel somewhat quaint; AI image generation can do so much more now. 

We don’t really know where things are going. How exactly do you prepare when the potential paths are so divergent? 

Some people claim AI will replace programmers. Others say we will always need people who can dig deep into the technical details. Personally, I lean towards the latter. If AI coding hasn’t peaked yet, we will likely see a 1000x increase in the amount of code being written. ChatGPT is quite good at explaining things, but will it be good at explaining the interactions between multiple programs it has written? We can’t know at this point. 

Image of a line going exponential. Credit to Luke Muehlhauser who created and watermarked this image.

We are at the far right of that line now. We’ve discovered something about meaning in these large language models: a mapping between language and image, and mappings between language and language. It’s not AGI, but much like Deep Blue, it has obviously eclipsed human capabilities in some way. 

Neal Stephenson’s ‘The Diamond Age’ is a book I was intrigued by in my younger years. In it a girl is given an AI-powered book that acts as her tutor from a very young age. Much like that fictional book, ChatGPT will likely become every child’s tutor going forward. Much like the iPhone, you won’t be able to buy a better one. Children have already used ChatGPT to make homework and writing assignments obsolete. The education system likely will not survive this advancement. 

The sum total of human knowledge has been put into this machine. Everyone who ever wrote anything is part of it. Buckle up. Don’t panic. Hold on. Let’s see what happens next.