
The tech industry's love affair with AI is hitting some brutal reality checks. As I sit here analysing the latest developments, I can't help but feel we're witnessing a perfect storm of ambition, desperation, and good old-fashioned chaos.
The AI Gold Rush Claims Its Victims
Oracle's recent massacre of thousands of jobs whilst simultaneously ramping up AI spending perfectly encapsulates the contradictory nature of tech in 2026. I've been in this industry long enough to recognise the pattern: companies are betting the farm on AI whilst forgetting they need actual humans to make it work.
What strikes me most about Oracle's move isn't the redundancies themselves – we've seen this playbook before. It's the timing. Here's a company that's supposedly positioning itself for the AI revolution, yet it's cutting the very people who understand its systems, maintain its infrastructure, and keep the lights on. It's like building a rocket ship whilst firing your engineers.
The irony isn't lost on me. These tech giants preach about AI augmenting human capabilities, not replacing them. Yet when push comes to shove, the spreadsheet warriors in the C-suite see employees as line items to be erased in favour of shinier AI investments. You can't automate your way to innovation – trust me, I've tried.
When Autonomous Vehicles Forget How to Drive
Speaking of AI promises falling flat, Baidu's Apollo Go self-driving cars stopping mid-traffic in China is exactly the kind of reality check this industry needs. I've been following the autonomous vehicle space since the early promises of "full self-driving by 2020" (remember those?), and here we are in 2026 with cars that can't figure out basic traffic situations.
What makes this particularly damning is that Baidu isn't some startup playing with toy cars. This is one of China's tech titans, with resources that would make most companies weep with envy. Yet their vehicles are stopping in the middle of traffic, creating hazards and presumably giving their passengers minor heart attacks.
The fundamental problem with autonomous vehicles hasn't changed since I first wrote about them years ago: driving isn't just about following rules, it's about understanding context. A human driver sees construction ahead and intuitively knows to slow down and merge. An AI sees an unexpected obstacle and sometimes just… stops. In the middle of the road. Brilliant.
I'm not saying self-driving cars will never work. I'm saying that the gap between "works in controlled conditions" and "works in the chaos of real-world traffic" is far wider than Silicon Valley wants to admit. Every time I see these incidents, I'm reminded that we're beta testing potentially lethal technology on public roads.
The Subscription Economy Eats Its Own Tail
The news about Claude Code users hitting usage limits faster than expected reveals another uncomfortable truth about the AI revolution: these systems are expensive to run, and someone has to pay for them.
I use AI coding assistants daily in my web development work, and while they're genuinely useful, the business model is fundamentally broken. Companies are burning through venture capital to subsidise usage, hoping to hook developers before reality sets in. When users hit those limits "way faster than expected," it tells me the providers themselves don't understand their own economics.
This isn't sustainable. We're creating a generation of developers dependent on AI tools that might price themselves out of existence. What happens when the VC money dries up and these services need to charge what they actually cost to run? I predict a rude awakening for many who've built their workflows around cheap or free AI assistance.
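To make the economics concrete, here's a toy back-of-envelope calculation. Every number in it is a hypothetical assumption I've invented for illustration, not a figure from any provider, but the shape of the problem holds: a flat subscription fee against per-token inference costs that scale with usage.

```python
# Toy illustration: flat-rate subscription revenue versus inference cost.
# ALL numbers below are hypothetical assumptions, not real provider figures.
price_per_month = 20.00            # assumed subscription fee, USD
cost_per_million_tokens = 10.00    # assumed inference cost, USD
tokens_per_request = 8_000         # assumed prompt + completion size
requests_per_day = 200             # assumed heavy-user daily usage

monthly_tokens = tokens_per_request * requests_per_day * 30
provider_cost = monthly_tokens / 1_000_000 * cost_per_million_tokens

print(f"Provider cost: ${provider_cost:.2f} vs revenue: ${price_per_month:.2f}")
```

Under these made-up assumptions the provider spends $480 serving a heavy user who pays $20 – which is exactly the kind of gap that usage caps exist to close once the subsidies tighten.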
The smart money is on learning to use these tools efficiently whilst maintaining the ability to code without them. AI should amplify your skills, not replace them. When Claude or GPT or whatever the flavour of the month is becomes too expensive or restrictive, you need to be able to fall back on actual knowledge.
Cybersecurity: The Eternal Afterthought
Hasbro getting hit by a cyber attack might seem like small potatoes compared to AI revolutions and job massacres, but it highlights a persistent problem in tech: security is still treated as an afterthought. When a company that owns beloved children's brands like Peppa Pig and Transformers gets breached, it's not just corporate data at risk – it's potentially millions of children's information.
I've spent years banging on about WordPress security, and the same principles apply here. Most breaches aren't sophisticated nation-state operations – they're criminals exploiting basic vulnerabilities that should have been patched months ago. Yet companies continue to underinvest in security whilst chasing the next shiny thing.
What's particularly galling is that we have the tools and knowledge to prevent most attacks. It's not rocket science: keep your systems updated, train your staff, implement proper access controls, and for the love of all that's holy, stop storing sensitive data in plain text. But that's not exciting. That doesn't get venture capital funding. That doesn't make headlines until something goes catastrophically wrong.
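Take the plain-text point: the fix has been sitting in standard libraries for years. A minimal sketch of doing it properly, using only Python's standard library – the function names here are my own illustration, but `hashlib.scrypt` and `hmac.compare_digest` are real stdlib calls:

```python
# Minimal sketch: never store credentials in plain text.
# Salt each password and hash it with scrypt, a memory-hard KDF
# designed to make brute-force attacks expensive.
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); store both, never the password itself."""
    salt = secrets.token_bytes(16)  # unique random salt per user
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

A dozen lines, no venture capital required. The per-user salt means two users with the same password get different digests, and the constant-time comparison avoids leaking information through timing.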
The Hasbro breach is a reminder that while we're all distracted by AI and automation, the basics still matter. You can have the most advanced AI system in the world, but if someone can walk in through an unpatched vulnerability, what's the point?
The Human Cost of Tech’s Identity Crisis
Looking at these stories collectively, I see an industry in the midst of an identity crisis. We're simultaneously racing towards an AI-powered future whilst failing at basic blocking and tackling. Oracle cuts thousands of jobs to fund AI development. Baidu's AI cars can't navigate traffic. AI coding tools can't sustainably price their services. Major corporations can't secure their systems.
What connects all these failures is a fundamental misunderstanding of what technology is for. Tech should solve human problems, not create new ones. When your self-driving car causes traffic jams, your AI investments cost thousands of jobs, and your systems leak customer data, you're not innovating – you're just moving problems around.
I've been in this industry since the dial-up days, and I've never seen such a disconnect between promise and delivery. The marketing speaks of revolution, transformation, and paradigm shifts. The reality is broken systems, unsustainable business models, and a complete disregard for the human element.
Where Do We Go From Here?
Despite my criticisms, I'm not a Luddite. I use AI tools daily, appreciate autonomous vehicle research, and understand why companies need to evolve. But evolution doesn't mean abandoning common sense.
Here's what I think needs to happen: First, we need to stop treating AI as a silver bullet. It's a tool, not a messiah. Oracle laying off thousands whilst investing in AI is like selling your car to buy petrol – strategically nonsensical.
Second, we need to get serious about sustainable business models. The current VC-subsidised AI free-for-all will end, and when it does, a lot of companies and developers will be left scrambling. Better to prepare now than react later.
Third, basics matter. Security isn't optional. Testing isn't optional. Having humans who understand your systems isn't optional. You can't innovate your way past fundamental requirements.
Finally, we need honest conversations about what AI can and can't do. Baidu's traffic-stopping cars aren't failures of technology – they're failures of expectation management. Set realistic goals, test thoroughly, and don't beta test on the public without their informed consent.
The tech industry in 2026 feels like watching a teenager with a credit card – lots of enthusiasm, lots of spending, not much wisdom. We're making the same mistakes we've always made, just with fancier technology and bigger price tags.
As someone who's built their career on technology, I want to see us do better. Not just newer or faster or more automated – actually better. That means thinking about consequences, planning for sustainability, and remembering that at the end of the day, technology exists to serve humans, not the other way around.
The stories this week aren't isolated incidents – they're symptoms of an industry that's lost its way. Until we remember that good technology is boring technology that just works, we'll keep seeing these predictable disasters. And I'll keep writing about them, probably with increasing levels of frustration.
Frequently Asked Questions
Why are tech companies cutting jobs whilst investing in AI?
Companies believe AI will eventually reduce operational costs and increase efficiency. They're cutting their current workforce to fund AI development, betting that future automation will more than compensate for lost human expertise – though I think this is terribly short-sighted.
Are self-driving cars actually safe?
Current self-driving technology is impressive in controlled conditions but struggles with unexpected real-world scenarios. While they may be statistically safer than human drivers in some situations, incidents like Baidu's traffic stops show we're not ready for full autonomy.
Should developers worry about AI replacing their jobs?
AI tools are making developers more productive, not replacing them entirely. However, developers who don't adapt and learn to work with AI tools may find themselves at a disadvantage. The key is using AI to amplify your skills whilst maintaining core competencies.




