Almost one year ago, I wrote about how I was jumping into AI as a skeptic. In a funny bit of timing, I joined an AI startup, ReflexAI, that May and was thrust into the forefront of what appeared to be a fundamental change to the technology industry, even though its value was unproven.

A year’s worth of progress

So here we are at the start of 2026, and the mania has only increased since then. Models have gotten extremely capable. The arms race across OpenAI, Anthropic, Google and others has been stunning to watch, with each doing everything they can to one-up the others. While still not perfect, the amount of effort it takes to craft a “good enough” prompt is far lower than it used to be.

At the time of this writing, Anthropic’s Opus 4.5 (and now 4.6) is the gold standard of models for coding. While it costs more than the alternatives, the quality of its output is shocking, to say the least.

Harnesses like Claude Code, Codex and Opencode provide excellent developer workflow capabilities that know how to best leverage the underlying models. Long gone is the expectation that these LLMs are only good for chat-based experiences. Now it takes little to no effort to get a fully customizable workflow going to build whatever you’d like - whether that’s code or productivity enhancements that help you with most computer-based tasks.
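To give a sense of how little glue is involved, here’s a minimal sketch of a scripted workflow built on Claude Code’s non-interactive print mode (`claude -p`). The prompt wording and file path are hypothetical; the point is that a repeatable, customized task is a few lines of scripting rather than a bespoke integration.

```python
import subprocess

def run_review(diff_path: str) -> str:
    """Ask Claude Code (non-interactive 'print' mode) to review a diff.

    A sketch only: assumes the `claude` CLI is installed and authenticated.
    The prompt text is illustrative, not a prescribed workflow.
    """
    with open(diff_path) as f:
        diff = f.read()

    result = subprocess.run(
        ["claude", "-p", f"Review this diff for bugs and style issues:\n\n{diff}"],
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

if __name__ == "__main__":
    print(run_review("changes.diff"))  # hypothetical diff file
```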

Agents are now also expected to be easily deployed into a state that other tools can reach, so your workflows and capabilities can be used from whatever modality you choose. Being able to send a message on your preferred messaging platform and have the work done in an entirely isolated sandbox is quite a powerful capability.
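A rough sketch of that pattern, under some loose assumptions: a message arrives from whatever platform you use, and the actual work runs in a throwaway, network-isolated container. The image name and `run-agent` entrypoint are hypothetical stand-ins; real deployments layer auth, queuing, and result delivery on top.

```python
import subprocess

SANDBOX_IMAGE = "agent-sandbox:latest"  # hypothetical image with an agent harness preinstalled

def handle_message(task: str) -> str:
    """Run an agent task in a disposable, network-isolated container.

    --rm discards the container when the task finishes, and
    --network none keeps the sandbox cut off from the network.
    """
    result = subprocess.run(
        ["docker", "run", "--rm", "--network", "none",
         SANDBOX_IMAGE, "run-agent", task],
        capture_output=True,
        text=True,
        timeout=600,  # don't let a runaway task hold the worker forever
    )
    return result.stdout

# e.g. wired to a chat webhook:
# handle_message("summarize yesterday's error logs")
```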

Making sense of it all

I ended 2025 reviewing some new metrics about “% of code written by LLMs” for the Software Engineering department at ReflexAI. Our CEO asked me if our ~40% was good or bad - and I honestly replied that I didn’t have a sense yet of what it actually meant. Now, two months into the year, I’m already seeing this metric increase significantly.
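For what it’s worth, there’s no standard way to compute a metric like this. One rough proxy, sketched below, is to compare lines added in commits carrying the Co-Authored-By trailer that Claude Code adds by default against lines added overall. This is not how we compute it at ReflexAI; attribution at this granularity is fuzzy (hand edits land in those commits too), so treat the output as directional at best.

```python
import subprocess

def changed_lines(extra_args: list[str]) -> int:
    """Sum lines added across commits matched by `git log` plus extra_args."""
    out = subprocess.run(
        ["git", "log", "--pretty=tformat:", "--numstat", *extra_args],
        capture_output=True, text=True, check=True,
    ).stdout
    total = 0
    for line in out.splitlines():
        parts = line.split("\t")  # numstat rows: added, deleted, path
        if len(parts) == 3 and parts[0].isdigit():  # skips binary files ("-")
            total += int(parts[0])
    return total

# Commits whose message carries the trailer Claude Code adds by default.
llm = changed_lines(["--grep=Co-Authored-By: Claude"])
total = changed_lines([])
if total:
    print(f"~{100 * llm / total:.0f}% of added lines came from LLM-assisted commits")
```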

The CEO of Anthropic got into some hot water when he famously claimed in March 2025 that 90% of code would be written by LLMs within 3-6 months. While that timeframe ended up being slightly aggressive, I’m starting to see trends suggesting that 90% is closer to where a lot of engineers will land than not.

I still don’t know if that’s a good thing long term. It means we’re able to produce an incredible amount of work in a much shorter period of time - but what we don’t know is what the future maintenance, extensibility, or long-term stability of these platforms will actually look like.

How comfortable will we be triaging production issues in systems we “orchestrated” but didn’t build? Is that so different from working at an organization where you inherit systems from previous developers, only to have to own them moving forward? Probably not, and odds are an LLM like Claude will help you ramp up your understanding of the system faster than before.

How do I feel about it now?

I’m far less of an AI skeptic than I was previously. The tooling has continued to get better and better, and I’m starting to see the fruits of the work across our teams at ReflexAI. I think we’re less at a stage of FOMO and more at one where we’d be remiss not to keep innovating and pushing the boundaries of what’s possible to be more efficient at our jobs. That’s not a new concept, and it’s why I think LLM-driven development is here to stay.