Why AI investment makes sense

With AI investment arguably exceeding $1 trillion over the last few years, many people are concerned about a bubble. Unlike the constant doomsayers who claim ‘AI has peaked’ despite the latest improvement shipping a week ago, the people worried about a bubble have a point. But frankly, I don’t think we are in much of a bubble.

Anthropic’s revenue is around $7 billion a year at this point. Google has been profitable for a long time. OpenAI has raised the most money relative to its revenue, but it has 700 million weekly active users. Once it rolls out ads and other monetization, the economics will change quite a lot.

Amazon famously ran little to no profit for the better part of two decades. OpenAI has only been a for-profit corporation for three years. A ton of money is being spent, and it feels like a lot of the other bubbles we’ve had in the last few decades. So let’s go through the logic of further investment.

Let’s say you are investing in racks of Nvidia chips. What determines your return on investment on those chips? Mainly utilization and the margin you charge cloud customers. If there is a lot of demand for AI inference, you will have high utilization. If your utilization is high, you will be able to raise prices and increase your margin. It boils down to demand for inference and, to an extent, training.
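
To make that concrete, here is a minimal back-of-envelope sketch in Python. Every number in it (rack cost, price per GPU-hour, operating cost) is my own illustrative assumption rather than real pricing; the point is just that utilization and margin dominate the return.

```python
# Back-of-envelope GPU rack ROI. All numbers are illustrative assumptions,
# not real pricing; the takeaway is that utilization and margin dominate.

def annual_return(capex: float, gpus: int, price_per_gpu_hour: float,
                  cost_per_gpu_hour: float, utilization: float) -> float:
    """Simple annual ROI: (revenue - operating cost) / upfront capital."""
    hours = 365 * 24
    billable = gpus * hours * utilization
    revenue = billable * price_per_gpu_hour
    opex = billable * cost_per_gpu_hour  # power, cooling, ops
    return (revenue - opex) / capex

# Hypothetical rack: 64 GPUs, $2M upfront, rented at $2.50/GPU-hour,
# costing $0.50/GPU-hour to run.
for util in (0.3, 0.6, 0.9):
    roi = annual_return(capex=2_000_000, gpus=64, price_per_gpu_hour=2.50,
                        cost_per_gpu_hour=0.50, utilization=util)
    print(f"utilization {util:.0%} -> annual return {roi:.0%}")
```

Triple the utilization and the return roughly triples. That is why demand for inference is the whole ballgame.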

So an investment in racks of Nvidia chips depends on demand for AI inference to be profitable. So we have to ask: will AI inference demand go up or down? What could make demand go down? What could make it go up?

An example of something that makes inference demand go up is the invention of chain-of-thought ‘reasoning’ models. These models spend more inference to produce higher quality output. If you were buying racks of Nvidia chips and you heard about chain of thought, you would try to double your order.

Something that might make inference demand go down is the original DeepSeek announcement. They managed to make a competitive model using far fewer training resources than anyone else, and we had a minor stock market crash in reaction.

Here is my basic argument for why investment in racks of chips is a good idea at this time.

The ‘smarter’ the models get, the more demand for AI there will be.

The more ways we figure out how to compose LLMs, the more demand for AI there will be.

We’re currently in an immense competition among 5+ frontier AI labs to push LLM-based AI to the absolute limit, and at this point we are seeing improvements month over month. When AI was useless, prior to GPT-3, inference demand was very low. Today we have hundreds of millions of people using ChatGPT each week. The better the models get, the more people want to use them.

Next, we are seeing more and more ways to compose models. Claude Code takes a single human command and splits it up into tasks, which are converted into a myriad of smaller inference calls. One human prompt might result in a dozen AI-generated prompts. Instead of one inference call, you are getting a dozen inference calls serving just one human request. This approach lets AI, spending even more inference, serve requests that weren’t possible for LLMs before.
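
To illustrate the fan-out, here is a hypothetical sketch of that pattern. The `call_llm` stub and the prompts are made up for illustration; this is not how Claude Code is implemented, just the shape of the multiplication.

```python
# Hypothetical agent-style fan-out: one human request becomes many model calls.
# call_llm() is a placeholder stub; swap in your provider's chat-completion API.

calls = 0

def call_llm(prompt: str) -> str:
    global calls
    calls += 1
    return f"[model output for: {prompt.splitlines()[0][:40]}]"

def handle_request(human_prompt: str) -> str:
    # One call to break the request into subtasks.
    plan = call_llm("Split this request into small independent tasks, one per line:\n"
                    + human_prompt)
    subtasks = [line for line in plan.splitlines() if line.strip()]

    # One call per subtask.
    results = [call_llm("Complete this task:\n" + task) for task in subtasks]

    # One final call to synthesize a single answer.
    return call_llm("Combine these results into one response:\n" + "\n".join(results))

handle_request("Add a pause menu to my game")
print(f"1 human request -> {calls} inference calls")  # grows with the number of subtasks
```

One human prompt turns into N + 2 inference calls, which is why better composition means more inference demand, not less.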

Basically: the smarter the output of AI, the more value humans get out of it, and the more demand there will be for inference. Even if we make efficiency improvements in inference, that should simply increase demand. If you are making a profit per request, you will increase the number of requests you make as the price per request goes down.

What’s the other side of this? Well, if AI stops getting smarter, a lot of companies are going to make far lower returns on investment than they hoped. But I think that is a really stupid bet to make. You don’t bet against further improvements in AI when improvements are coming out month over month.

In conclusion, as long as LLM performance continues to improve, we aren’t in an AI bubble. Once gains start to slow, the boom is over. My view is that if we get to a point where improvements come less often than once a year, we will have hit the plateau in LLM-based AI. But for now we are seeing month-over-month improvements in AI performance. I don’t think we are in a bubble.

The cybersecurity bar has risen

Anthropic just released a report on a sophisticated Claude Code- and MCP-based hacking ring. By their estimate, humans provided only 10-20% of the decision making for the operation. The hack involved multiple requests per second to Anthropic’s APIs across dozens of ‘agents’. The attackers managed to infiltrate several technology firms and exfiltrate credentials.

https://www.anthropic.com/news/disrupting-AI-espionage

If you’ve ever run a website on the open internet, you have likely seen server logs full of automated exploit attempts. Hackers run bots which automatically try dozens of well-known attacks against every server on the internet. These scripts attempt everything from SQL injection to server-stack-specific vulnerabilities. If you aren’t keeping up with updates, eventually they will get you.
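
If you want to see this in your own logs, here is a minimal sketch that greps an access log for a few well-known probe paths. The probe list is a small illustrative sample, and it assumes a standard combined-format log with the client IP as the first field; adjust both for your stack.

```python
# Count source IPs hitting well-known exploit-scanner paths in an access log.
# The probe list is a small illustrative sample, not exhaustive.

import re
from collections import Counter

PROBE_PATTERNS = [
    r"/wp-login\.php",   # WordPress brute forcing
    r"/\.env",           # leaked environment files
    r"/phpmyadmin",      # exposed database admin panels
    r"union\s+select",   # naive SQL injection attempts
    r"/\.git/",          # exposed git repositories
]
probe_re = re.compile("|".join(PROBE_PATTERNS), re.IGNORECASE)

def count_probes(log_path: str) -> Counter:
    hits = Counter()
    with open(log_path, errors="replace") as f:
        for line in f:
            if probe_re.search(line):
                hits[line.split()[0]] += 1  # first field: client IP
    return hits

if __name__ == "__main__":
    for ip, count in count_probes("access.log").most_common(10):
        print(f"{ip}: {count} probe requests")
```

Run it against a week of logs from any public-facing server and the top of the list is unlikely to be empty.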

But these scripts were just scripts. A human found a vulnerability manually, then added code for it to the script. Autonomous cyberattacks have historically attempted the same old hacks against every server. AI changes the game here. Now hackers can apply multiple ‘agents’ to each site, and those agents can dig through the code and analyze it for vulnerabilities.

Recently Tata Motors was found to have major AWS credentials exposed in publicly accessible web content.

https://eaton-works.com/2025/10/28/tata-motors-hack

This is the kind of thing Claude Code can figure out today. Smart, hyper-scale hacking attacks are only going to escalate. There are already open-weight models out there, and the major Chinese labs will let Chinese state-sponsored hacking rings use their AI for this purpose.
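
The cheap defensive counterpart is to scan everything you publish for credential patterns before it ships. Below is a minimal sketch that checks files for AWS-style access key IDs and hardcoded secrets; the two patterns are illustrative only, and in practice you would use a dedicated secret scanner such as gitleaks or trufflehog.

```python
# Minimal leaked-credential check for files you are about to publish.
# The patterns are a small illustrative sample; use a real secret scanner in practice.

import re
import sys
from pathlib import Path

PATTERNS = {
    "AWS access key ID": re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b"),
    "hardcoded secret": re.compile(
        r"(secret|password|api[_-]?key)\s*[:=]\s*['\"][^'\"]{8,}['\"]", re.IGNORECASE),
}

def scan(root: str) -> int:
    findings = 0
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="replace")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            for match in pattern.finditer(text):
                findings += 1
                print(f"{path}: possible {name}: {match.group(0)[:12]}...")
    return findings

if __name__ == "__main__":
    sys.exit(1 if scan(sys.argv[1] if len(sys.argv) > 1 else ".") else 0)
```

Wire something like this into CI and the Tata Motors class of mistake gets caught before an AI agent, or anyone else, finds it for you.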

What does this mean for cybersecurity? 

The bar just rose. Shitty cybersecurity isn’t going to cut it anymore. Vulnerabilities will be exploited within days of new code being shipped. You will be pwned instantly. Your vibe code will be vibe hacked just as fast as you can deploy changes. 

Common AI tropes that scare me in 2025

The “@grok is this true” phenomenon

On X, users often reply to a post with “@grok is this true”, which triggers Elon’s LLM to respond to their post. As UX goes, it works pretty well. The problem is that people use it for the most mundane questions. It really emphasizes how little the average person knows about things. Even worse, people attempt to use “@grok is this true” in online arguments. I’ve also had people try to use their ChatGPT Deep Research output in arguments against me. This is a problem, because LLMs will say whatever you want them to. You can’t use an LLM in an argument; an LLM is not a reliable source. But more importantly, using LLMs in arguments is bad because it makes arguing with you comparable to a conversation with an LLM. Why bother talking to you if it’s the same as talking to an LLM?

The “LLMs will never achieve intelligence/AGI/my standard” trope

You don’t know that. You cannot prove a negative. No one understands this five-year-old technology well enough to say how far the thousands of engineers and trillions of dollars being spent on it will take it. It might get stuck somewhere. It might keep going beyond AGI into the much-imagined singularity. You, like the rest of us, have no idea where this is going. Wait until we go a full calendar year without an improvement coming out in some aspect of LLM-based AI functionality.

The “Drone swarms” trope

Drone swarms are not a real thing. 99% of drones used in anger have a human controlling them every second of flight. The rest have a human specify their course and target manually ahead of time. AI is nowhere near as good at flying drones as humans are. FPV drones are terrifying because they let a human precisely steer an explosive charge onto a target remotely; AI is nowhere near that good at flying anything.

The other problem with drone swarms is that they do not make a ton of sense. The advantage of drones is that they let you precisely target things with cheap explosives; they are extremely cheap precision-guided munitions. The common FPV drone has a maximum flight time of around 30 minutes. That’s not a lot of time for swarming, so if your drone doesn’t find something to hit in that window, it’s falling out of the sky. If you have a drone swarm with thousands of drones, you need thousands of targets for those drones to attack. The battlefield, at least in Ukraine, doesn’t offer any place with thousands of targets within drone swarm range.
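
A rough back-of-envelope makes the point. The 30-minute endurance figure is from above; the cruise speed and swarm size are my own illustrative assumptions.

```python
# Rough arithmetic on why endurance limits swarming.
# 30-minute endurance is from the text; speed and swarm size are assumptions.

endurance_hours = 0.5      # ~30 minutes of flight time
cruise_speed_kmh = 60      # assumed average speed for a munition-carrying FPV
swarm_size = 1000          # hypothetical swarm

one_way_reach_km = endurance_hours * cruise_speed_kmh  # ~30 km, one way
print(f"One-way reach: {one_way_reach_km:.0f} km")
print(f"A {swarm_size}-drone swarm needs roughly {swarm_size} worthwhile "
      f"targets inside that radius, all within the same half hour.")
```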

Drone swarms make sense for two types of attacks. First, they make sense on the opening day of a war, if you have the element of surprise and are targeting a strategic asset. Basically Pearl Harbor, but with drones. Second, drone ‘swarms’ make sense for terrorism against civilian targets. Terrorists could generate a lot of terror and have a good chance of living long enough to run away. It won’t be fun for them when they get caught, though.

Have you written your last for loop? 

Lately I’ve been doing a lot of coding with the AI agent Claude Code. It is a terminal application that lets you chat with it live while it attempts to code whatever you direct it to. My game Forward Blaster https://github.com/Sevii/forward-blaster was written by AI, except for a few places where I edited constants. Needless to say, the game includes a lot of loops. But I didn’t write any of them. I never felt the need to leave the AI agent workflow and start manually coding.

Which leads to the question: ‘Have you written your last for loop?’ I don’t think I have, but it’s very likely that time is coming. AI coding agents are quite good. You can tell them what type of iterator to use if you like, but in general it doesn’t matter much.
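
For what it’s worth, the choice rarely matters because the variants are interchangeable. The snippet below is a hypothetical example of the same loop written two ways an agent might produce it, depending on what you ask for.

```python
# Two equivalent versions of the same loop an agent might write:
# squares of the even numbers below ten.

# Explicit for loop:
squares = []
for n in range(10):
    if n % 2 == 0:
        squares.append(n * n)

# Comprehension, which the agent may prefer unless told otherwise:
squares = [n * n for n in range(10) if n % 2 == 0]
```

Either way I didn’t type it, which is rather the point.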

AI agents also assist with debugging and architecture, but not to the degree they do at writing code. LLMs can recognize many simple bugs and fix them without much human interaction. Where they really struggle is functionality that doesn’t match the specification. Even if you’ve given them the specification, they cannot consistently recognize when something works, but in the wrong way. And if there is a ‘one right way’ in your particular context, your best bet is to tell the AI to use that ‘one right way’ explicitly.

Software engineering is changing a lot. We used to write blog posts about minimizing toil so that engineers could focus on the value-add parts of the job. Few expected we could eliminate the time spent coding (the fun part), freeing up time for other engineering activities like planning, architecture, estimating, security review, on-call and the like.

Check out my latest book Code Without Learning to Code on Amazon!

Vibecoding Demo

I released a short demo of AI-assisted vibe coding to the YouTube channel. Basic games like this are one of my favorite use cases for vibecoding. LLMs are pretty good at it and the payoff is quick. Back in my teens I tried to learn programming to make games. I’d have gotten a lot farther if we’d had AI then instead of trying to figure out C++ from a two-inch-thick book!

To learn more about AI-assisted ‘vibecoding’, check out the book Code Without Learning to Code at https://codewithoutlearningtocode.com.