Common AI tropes that scare me in 2025

The “@grok is this true” phenomenon

On X, users often reply to a post with “@grok is this true”, which triggers Elon’s LLM to respond to the post. As UX goes, it works pretty well. The problem is that people use it for the most mundane questions, which emphasizes how little the average person knows about things. Even worse, people try to use “@grok is this true” to win online arguments. I’ve also had people throw their ChatGPT Deep Research output at me in arguments. This is a problem, because LLMs will say whatever you want them to. You can’t use an LLM in an argument; an LLM is not a reliable source. More importantly, citing LLMs in arguments is bad because it makes arguing with you comparable to a conversation with an LLM. Why bother talking to you if it’s the same as talking to an LLM? 

The “LLMs will never achieve intelligence/AGI/my standard” trope

You don’t know that. You cannot prove a negative. No one understands this five-year-old technology well enough to say how far the thousands of engineers and trillions of dollars being poured into it will go. It might get stuck somewhere. It might keep going beyond AGI into the much-imagined singularity. You, like the rest of us, have no idea where this is going. Wait until we go a full calendar year without an improvement in some aspect of LLM-based AI before making that claim. 

The “Drone swarms” trope

Drone swarms are not a real thing. 99% of drones used in anger have a human controlling them for every second of flight. The rest have a human specify their course and target manually ahead of time. AI is nowhere near as good at flying drones as humans are. FPV drones are terrifying because they let a human precisely place an explosive charge remotely; AI is nowhere near that good at flying anything. 

The other problem with drone swarms is that they don’t make much sense. The advantage of drones is that they let you precisely target things with cheap explosives: they are extremely cheap precision-guided munitions. The common FPV drone has a maximum flight time of around 30 minutes. That’s not a lot of time for swarming, so if your drone doesn’t find something to hit in that window, it’s falling out of the sky. A swarm of thousands of drones needs thousands of targets to attack, and the battlefield, at least in Ukraine, doesn’t offer any place with thousands of targets within swarm range. 

Drone swarms make sense for two types of attacks. They make sense on the first day of a war, if you have the element of surprise and are targeting a strategic asset: basically Pearl Harbor, but with drones. Secondly, drone ‘swarms’ make sense for terrorism against civilian targets. Terrorists could generate a lot of terror and have a good chance of living long enough to run away. It won’t be fun for them when they get caught, though. 

Have you written your last for loop? 

Lately I’ve been doing a lot of coding with the AI agent Claude Code, a terminal application that lets you chat with it live while it attempts to code whatever you direct it to. My game Forward Blaster (https://github.com/Sevii/forward-blaster) was written by AI, except for a few places where I edited constants. Needless to say, the game includes a lot of loops, but I didn’t write any of them. I never felt the need to leave the AI agent workflow and start coding manually. 

Which leads to the question: ‘Have you written your last for loop?’ I don’t think I have, but that time is very likely coming. AI coding agents are quite good. You can tell them what type of iterator to use if you like, but in general it doesn’t matter much. 
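When the iteration style does matter to you (readability, laziness, memory), you can simply name it in the prompt. The snippet below is illustrative, not from any AI output: three equivalent Python iteration styles an agent could be asked to use.

```python
# Three equivalent ways to square the even numbers 0..9.
# An agent will pick one on its own; you can request a specific style.

# Classic for loop
squares_loop = []
for n in range(10):
    if n % 2 == 0:
        squares_loop.append(n * n)

# List comprehension
squares_comp = [n * n for n in range(10) if n % 2 == 0]

# Generator expression (lazy; materialized here only for comparison)
squares_gen = list(n * n for n in range(10) if n % 2 == 0)

assert squares_loop == squares_comp == squares_gen == [0, 4, 16, 36, 64]
```

All three produce the same list; which one the agent writes rarely changes the program’s behavior.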

AI agents also assist with debugging and architecture, though not to the degree they assist with writing code. LLMs can recognize many simple bugs and fix them without much human interaction. Where they really struggle is functionality that doesn’t match the specification: even when you’ve given them the spec, they cannot consistently recognize code that runs but behaves the wrong way. And if there is a ‘one right way’ in your particular context, your best bet is to tell the AI to use that ‘one right way’ specifically. 

Software engineering is changing a lot. We used to write blog posts about minimizing toil so that engineers could focus on the value-add parts of the job. Few expected we could eliminate the time spent coding (the fun part), freeing up time for other engineering activities like planning, architecture, estimating, security review, on-call, and the like. 

Check out my latest book Code Without Learning to Code on Amazon!

Vibecoding Demo

I released a short demo of AI-assisted vibe coding to the YouTube channel. Basic games like this are one of my favorite use cases for vibecoding: LLMs are pretty good at them and the payoff is quick. Back in my teens I tried to learn programming to make games. I’d have gotten a lot farther if we’d had AI then, instead of trying to figure out C++ from a two-inch-thick book!

To learn more about AI-assisted ‘vibecoding’, check out the book Code Without Learning to Code at https://codewithoutlearningtocode.com.

The original launch of Copilot soured a lot of people on AI for coding

In 2023 I was one of the many software engineers whose management pushed them into trying GitHub Copilot. I was not impressed. At the time, Copilot was basically pointless for Java developers: IntelliJ’s code completion already did everything Copilot did, but more deterministically. 

In retrospect, autocomplete was not the right use case for AI in the programming process. Personally, I find AI chatbots to be very useful coding assistants for writing scripts and functions. And this last year, AI coding agents have really come into their own. 

I’ve never been a Python guy, despite using it for light scripting tasks for a decade. But lately I’ve been using it a lot more, because Claude can consistently create working ~500-line Python scripts in a couple of prompts. The days when AI spat out code that didn’t even run are mostly behind us. 

AI is just much better at coding now than it was in 2023. And I suspect ‘autocomplete’ is simply a bad use case for AI. The more powerful models tend to produce output more slowly: you are looking at a 15+ second wait with edge models, and in Claude Code and other agents, multiple 15+ second waits are pretty annoying. It’s like compiling a huge Java project. For me, the chatbot model works great: you write up a prompt, provide examples and context, then Claude spits out a running program. You test it and iterate. Autocomplete has the problem that edge models are never going to have the latency you want when you push ctrl-shift-enter. It just feels better to use an agent or chatbot. 

Code Without Learning to Code

https://codewithoutlearningtocode.com

I’m working on a new book on vibe coding for people who don’t know how to code. Using LLMs to build simple programs unlocks a lot of programming ability for people who either tried and failed to learn to program or never got started. You no longer need to learn the basics of programming logic or syntax to build useful programs that solve your problems. 

The current target audience is people who don’t know how to code but are willing to learn to run and test the programs they create through vibe coding. 

As part of this process I’m doing some research comparing free and paid LLM models for programming use. 

For each model, I pasted in the same prompt and took the first result. I saved the code into a folder that already had pygame installed via pip and ran it directly.

“Please create a game for me using python and pygame. In the game the player should navigate a 2d space using the arrow keys. In this game there should be a maze like region with rocks and stalagmites. Inside the region should be chests which contain gold. The player should be able to navigate the maze and collect gold from the chests.”
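Stripped of pygame rendering, the core logic this prompt asks for is small: a grid with impassable rocks, chests holding gold, and a player moving one tile at a time. The sketch below is my own illustration of that logic (all names and values hypothetical, not any model’s actual output):

```python
# Minimal grid-movement and chest-collection logic, pygame-free.
# '#' = rock/stalagmite (impassable), 'C' = chest, '.' = open floor.
GRID = [
    "#####",
    "#.C.#",
    "#.#.#",
    "#..C#",
    "#####",
]
GOLD_PER_CHEST = 10  # hypothetical payout

def move(pos, direction, grid, opened):
    """Return (new_pos, gold_gained). Blocked moves leave pos unchanged."""
    dx, dy = {"up": (0, -1), "down": (0, 1),
              "left": (-1, 0), "right": (1, 0)}[direction]
    x, y = pos[0] + dx, pos[1] + dy
    if grid[y][x] == "#":            # rocks block movement
        return pos, 0
    gold = 0
    if grid[y][x] == "C" and (x, y) not in opened:
        opened.add((x, y))           # each chest pays out only once
        gold = GOLD_PER_CHEST
    return (x, y), gold

# Walk the player from the top-left corner to both chests.
pos, total, opened = (1, 1), 0, set()
for step in ["right", "right", "down", "down"]:
    pos, gained = move(pos, step, GRID, opened)
    total += gained
print(total)  # prints 20
```

In the generated games, pygame’s event loop maps arrow-key presses to these `move` calls and draws the grid each frame; the collision and chest logic is the part the models all have to get right.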

Anthropic Claude Haiku 3.5 (free)

https://github.com/Sevii/vibecoding/blob/master/MakingGames/BlogPost_LLM_Comparison/haiku35_chest_game.py

Anthropic Claude Sonnet 3.7 (paid)

https://github.com/Sevii/vibecoding/blob/master/MakingGames/BlogPost_LLM_Comparison/claude37_sonnet_chest_game.py

Gemini 2.0 Flash (free)

https://github.com/Sevii/vibecoding/blob/master/MakingGames/BlogPost_LLM_Comparison/gemini_2_flash_chestgame.py

Gemini 2.5 Pro Experimental (paid)

https://github.com/Sevii/vibecoding/blob/master/MakingGames/BlogPost_LLM_Comparison/gemini_25_pro_experimental_chest_game.py

ChatGPT (free) 

https://github.com/Sevii/vibecoding/blob/master/MakingGames/BlogPost_LLM_Comparison/free_chatgpt_chest_game.py

It’s interesting to see how the paid models differ from the free ones. But we are getting working code on the first pass from both.