Harrison Stoneham
How Claude Code Changed the Way I Work


I didn’t get into AI through some grand strategy. I started tinkering with OpenAI’s API because I wanted to build research tools — ways to pull together deal data, surface patterns, speed up the parts of due diligence that felt like they should already be automated. It worked well enough. I’d write a prompt, get output, stitch things together with a little code. Useful. But I was still running everything. The machine was doing what I told it, when I told it.

Then I started using Cursor.

That’s when it first felt different. You could describe what you wanted and watch it build — not just autocomplete a line, but actually reason about the structure of the thing. I spent a few weeks genuinely surprised by what it would produce. But I was still in the loop. Still reviewing every decision, still redirecting when it went sideways.

Then I moved to Claude Code, and something shifted in a way I’m still processing.


The difference wasn’t speed. It was context. Claude Code could see the whole project — every file, every prior decision, the schema, the architecture, the way things connected. Not just the file I had open. The entire directory. And when you give a model that much context, the output stops feeling like assisted writing and starts feeling like working with someone who’s actually read the project.

I’d give it a prompt. It would spin up a todo list, break the work into tasks, and start building. I’d come back and it had done something. Something real. Not a draft I needed to fix — a thing that worked.

Howard Marks, in his February memo AI Hurtles Ahead, described the same unlock from the outside. Someone built him a nine-module curriculum specifically tailored to his intellectual frameworks — referencing his past memos, his mental models, the way he thinks about investing. Claude had read everything he’d ever written and used it. His reaction:

“I want to try to communicate the level of awe with which I viewed Claude’s output. It read like a personal note from a friend or colleague. It made reference to things I’ve talked about in past memos, like the sea change in interest rates and the pendulum of investor psychology, and it used them in metaphors related to AI.”

That’s the context window doing its job. When the model knows your whole world — your history, your patterns, how you think — the output is completely different from a search result. It’s not pulling up information. It’s actually using it.


Marks lays out a framework in the memo that I think is the clearest description of where we’ve been and where we are. He calls it three levels.

Level 1 is chat AI — you ask questions, it answers. It saves research time. Level 2 is tool-using AI — it searches, analyzes, performs tasks when instructed. Level 3 is autonomous agents. The user sets a goal. The agent works, checks its own output, and delivers something finished.

His line on the difference between Level 2 and Level 3:

“This is labor replacement at the task level. Not assistance — replacement.”

And then this, which Claude apparently wrote about itself, and which Marks published to his investors:

“Level 3 agents are the automobile. They don’t make the work faster. They do the work.”

I’ve been thinking about that sentence since I read it. Claude wrote it. Marks published it to his investors. I read it while using Claude to work. I’m not sure what to do with that, but I keep coming back to it.

My own journey maps almost exactly onto his framework. The API keys were Level 1. Cursor was Level 2. Claude Code with Ralph Loop — an agent that runs the same prompt in an iterative loop against your entire codebase, improving on its own previous work until a condition is met — that’s the edge of Level 3. I’d set it running before I went to sleep. I’d wake up and it had made real progress. Nobody supervised it. It just worked.
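The pattern behind that overnight run can be sketched in a few lines. This is a minimal illustration of an iterative agent loop in general, not Ralph Loop's actual implementation; all names here (`run_loop`, `step`, `is_done`) are hypothetical stand-ins for "run the prompt," "check the output," and "stop when done."

```python
# Sketch of an iterative improvement loop (hypothetical names, not the
# real Ralph Loop): apply the same step repeatedly, feeding each result
# back in, until a stop condition is met or the budget runs out.

def run_loop(step, is_done, state, max_iters=50):
    """Repeatedly apply `step` to `state` until `is_done(state)` is true."""
    for i in range(max_iters):
        if is_done(state):
            return state, i          # finished early: return result and iteration count
        state = step(state)          # one pass of "run the prompt against the codebase"
    return state, max_iters          # budget exhausted

# Toy stand-in for "the agent improves its own previous work":
# each iteration clears one failing check.
state = {"failing_checks": 3}

def step(s):
    return {"failing_checks": s["failing_checks"] - 1}

def is_done(s):
    return s["failing_checks"] == 0

final, iters = run_loop(step, is_done, state)
```

The whole trick is in the feedback edge: each pass sees the previous pass's output, so progress compounds without anyone supervising in between.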


Now here’s the thing I didn’t have on my bingo card: Howard Marks writing a memo about Claude Code.

Marks runs Oaktree Capital. He’s one of the greatest credit investors alive — the guy who wrote the book on distressed debt, who thinks in terms of credit cycles and the psychology of risk, who spent his career in the least glamorous corner of finance and built one of the most respected investment firms in the world. He is about as far from a Silicon Valley AI enthusiast as a person can get.

And yet in February 2026, he published a memo where he discusses Claude Code by name, explains the difference between training and inference, walks through the context window as a key unlock, and quotes Claude explaining its own economic impact to his investors. He wrote at one point:

“Nothing has ever taken hold at the pace AI has. It’s able to change the world at a speed that approaches instantaneous, outpacing the ability of most observers to anticipate or even comprehend.”

When Howard Marks is writing about Claude Code in the same memo where he’s thinking about investment implications and labor markets — that’s not a tech story anymore. That’s a signal that something has crossed over. The two worlds collided, and I don’t think they’re separating.


But here’s the counterweight, and it’s important.

Even when something is real and proven, dispersion is slow. The iPhone launched in 2007. It captured 2.5% of the US mobile market that year. It took until 2010 for it to feel like a finished product that anyone would want. It took until 2015 — nearly a decade after the announcement — for Apple to be selling 200 million units a year. The technology was obviously transformative from day one. The S-curve still took a decade to play out.

I think about this every time someone gets in my Tesla.

Full self-driving works. I’ve been using it long enough that I don’t think about it much anymore. The car drives itself through city streets, handles intersections, navigates around cyclists and construction zones. I don’t touch the steering wheel. And without fail, the first time someone rides with me and the steering wheel moves on its own through a complicated turn, they go completely quiet. Sometimes they grab the door handle.

The technology is real. It works. I live with it every day. And people are still shocked.

That’s dispersion. The gap between “this exists and works” and “everyone uses it without thinking about it” is longer than anyone expects, every single time. AI will follow the same curve. The question for investors isn’t whether it’s real — Marks settles that, and I agree with him. The question is where we are on the S-curve, and what that means for when and where the value actually lands.


My sense is we’re somewhere past the inflection point but nowhere near the flat top. Claude Code reaching Howard Marks is a data point. The steering wheel still shocking people is a data point. Both things are true at once.

I used to think the bottleneck was the technology. Now I think the bottleneck is how fast people can update their mental model of what’s possible. The technology is running ahead of that. It’s been running ahead of that for a while.

Next week I want to get into what that looks like at the frontier right now — the tools that launched in the last 30 days that are taking this further than I thought was possible this soon. The loop that runs while you sleep.