The creator of Claude Code just revealed his workflow, and developers are losing their minds

When the creator of the world’s most advanced AI coding tool speaks, Silicon Valley doesn’t just listen, it takes notes.

Over the past week, the tech community has been abuzz over a thread on X by Boris Cherny, the creator and head of Claude Code at Anthropic. What started as an informal share of his personal terminal setup has turned into a viral manifesto on the future of software development, with industry insiders calling it a turning point for the startup.

“If you don’t read Claude Code best practices directly from the creator, you’re falling behind as a programmer,” wrote Jeff Tang, a prominent voice in the developer community. Kyle McNease, another industry observer, went further, stating that Anthropic is “on fire” with Cherny’s “game-changing updates,” and may be facing “their ChatGPT moment.”

The excitement stems from a paradox: Cherny’s workflow is surprisingly simple, yet it lets one person work with the production capacity of a small engineering department. As one user on X put it, the approach “feels more like Starcraft” than traditional coding – a shift from typing syntax to commanding autonomous units.

Here’s an analysis of the workflow that’s reshaping the way software is built, straight from the architect himself.

How running five AI agents at once turns coding into a real-time strategy game

The most striking revelation from Cherny’s thread is that he does not code linearly. In the traditional “inner loop” of development, a programmer writes a function, tests it, and moves on to the next. Cherny, by contrast, acts as a fleet commander.

“I’m using five Claudes in parallel in my terminal,” Cherny wrote. “I number my tabs 1-5 and use system notifications to know when a Claude needs input.”


By using iTerm2 system notifications, Cherny effectively manages five concurrent workflows. While one agent runs a test suite, another rebuilds a legacy module and a third prepares documentation. He also runs “5-10 Claudes on claude.ai” in his browser, using a “teleport” command to transfer sessions between the web and his local machine.
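The fan-out pattern Cherny describes can be sketched in a few lines. This is a hypothetical illustration, not Anthropic’s tooling: it runs several placeholder “agent” tasks concurrently and reports when one of them pauses for human input, the way his numbered terminal tabs do.

```python
import concurrent.futures

def agent_task(tab: int, steps: int) -> str:
    """Stand-in for one Claude session; real sessions run in terminal tabs."""
    for _ in range(steps):
        pass  # the agent works autonomously here
    return f"Tab {tab} needs input"  # the point where a human steps in

# Fan out five sessions, numbered 1-5 like Cherny's terminal tabs.
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    futures = {pool.submit(agent_task, tab, 1000): tab for tab in range(1, 6)}
    for done in concurrent.futures.as_completed(futures):
        print(done.result())  # in practice, a system notification fires instead
```

The operator’s attention, not the machine, is the scarce resource here: notifications let a human service five sessions the way an air-traffic controller services five runways.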

This reinforces the “do more with less” strategy that Anthropic President Daniela Amodei articulated earlier this week. While competitors like OpenAI pursue trillion-dollar infrastructure expansions, Anthropic is proving that superior orchestration of existing models can deliver exponential productivity gains.

The counterintuitive argument for choosing the slowest, smartest model

In a surprising move for an industry obsessed with latency, Cherny revealed that he exclusively uses Anthropic’s heaviest and slowest model: Opus 4.5.

“I use Opus 4.5 with thinking for everything,” Cherny explained. “It’s the best coding model I’ve ever used, and even though it’s bigger and slower than Sonnet, because it requires less control and is better at using tools, it’s almost always faster than using a smaller model in the end.”

For business technology leaders, this is a crucial insight. The bottleneck in modern AI development is not token generation speed; it is the human time spent correcting the AI’s mistakes. Cherny’s workflow suggests that paying the “compute tax” upfront for a smarter model eliminates the “correction tax” later.
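The trade-off is easy to make concrete. With purely illustrative numbers (none of these figures come from Cherny), a slower model that needs fewer human corrections wins on total wall-clock time:

```python
def total_time(gen_minutes: float, corrections: int, minutes_per_fix: float) -> float:
    """Wall-clock cost of one task: generation time plus human correction time."""
    return gen_minutes + corrections * minutes_per_fix

# Hypothetical: the big model generates more slowly but needs fewer fixes.
fast_small = total_time(gen_minutes=2, corrections=4, minutes_per_fix=10)  # 42 minutes
slow_large = total_time(gen_minutes=6, corrections=1, minutes_per_fix=10)  # 16 minutes
print(fast_small, slow_large)
```

As long as a human fix costs more minutes than the extra generation time, the “slowest, smartest model” finishes first.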

One shared file turns every AI mistake into a permanent lesson

Cherny also explained how his team is solving the problem of AI amnesia. Standard large language models do not “remember” a company’s specific coding style or architectural decisions from one session to the next.


To address this, Cherny’s team maintains one file called CLAUDE.md in their git repository. “Every time we see Claude do something wrong, we add it to CLAUDE.md so Claude knows not to do it next time,” he wrote.
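Mechanically, the practice is just appending rules to a checked-in file. A minimal sketch, with a helper name and rule format that are illustrative rather than taken from Cherny’s thread:

```python
from pathlib import Path

def add_rule(repo_root: str, rule: str) -> bool:
    """Append a lesson to CLAUDE.md, skipping exact duplicates."""
    memo = Path(repo_root) / "CLAUDE.md"
    existing = memo.read_text().splitlines() if memo.exists() else []
    line = f"- {rule}"
    if line in existing:
        return False  # already learned this one
    with memo.open("a") as f:
        f.write(line + "\n")
    return True

# Each review finding becomes a permanent instruction, e.g.:
# add_rule(".", "Never use floating point for currency amounts")
```

Because the file lives in git, the accumulated rules travel with the codebase and reach every session that opens the repository.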

This practice transforms the codebase into a self-correcting organism. When a human developer reviews a pull request and discovers a bug, they don’t just fix the code; they tag the AI to update its own instructions. “Every mistake becomes a rule,” noted Aakash Gupta, a product leader analyzing the thread. The longer the team works together, the smarter the agent becomes.

Slash commands and sub-agents automate the most tedious parts of development

The “vanilla” workflow praised by one observer is made possible by rigorous automation of repetitive tasks. Cherny uses slash commands (custom shortcuts checked into the project’s repository) to perform complex operations with a single keystroke.

He highlighted a command called /commit-push-pr, which he invokes dozens of times every day. Instead of manually typing git commands, writing a commit message, and opening a pull request, the agent autonomously handles the bureaucracy of version control.
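A slash command like this is essentially a scripted sequence of version-control steps. Cherny did not publish the command’s contents, so the following is a hypothetical reconstruction; it assumes the GitHub CLI (`gh`) for the pull-request step:

```python
import subprocess

def commit_push_pr(message: str, dry_run: bool = True) -> list[list[str]]:
    """Build the git/gh sequence a /commit-push-pr shortcut might run."""
    commands = [
        ["git", "add", "-A"],
        ["git", "commit", "-m", message],
        ["git", "push", "-u", "origin", "HEAD"],
        ["gh", "pr", "create", "--fill"],  # GitHub CLI; assumes gh is installed
    ]
    if not dry_run:
        for cmd in commands:
            subprocess.run(cmd, check=True)  # stop at the first failing step
    return commands

# Dry run: inspect the plan without touching the repository.
plan = commit_push_pr("Fix race in session teleport")
```

The value is not the four commands themselves but removing the context switch: the agent never leaves its loop to ask a human to do clerical git work.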

Cherny also deploys sub-agents – specialized AI personas – to handle specific phases of the development lifecycle. He uses a code simplifier to clean up the architecture after the main work is done, and a verification agent to test the app end-to-end before anything is shipped.
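Conceptually, a sub-agent is just a specialized system prompt bound to a phase of the lifecycle. The dispatch table below is a toy sketch; the persona names and prompt texts are invented for illustration, not Cherny’s actual configuration:

```python
# Hypothetical persona prompts, one per development phase.
SUB_AGENTS = {
    "simplify": "You are a code simplifier. Reduce complexity without changing behavior.",
    "verify": "You are a verifier. Exercise the app end-to-end and report any failures.",
}

def run_phase(phase: str, code: str) -> dict:
    """Package a development phase as a request for a specialized sub-agent."""
    if phase not in SUB_AGENTS:
        raise ValueError(f"unknown phase: {phase}")
    return {"system": SUB_AGENTS[phase], "input": code}

request = run_phase("simplify", "def f(x): return x if x else x")
```

Splitting the work this way keeps each agent’s context narrow: the simplifier never sees test logs, and the verifier never argues about style.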

Why verification loops are the real unlock for AI-generated code

If there is a single reason why Claude Code reportedly hit $1 billion in annual recurring revenue so quickly, it is probably the verification loop. The AI is not just a text generator; it is its own tester.


“Claude tests every change I make to claude.ai/code using the Claude Chrome extension,” Cherny wrote. “It opens a browser, tests the UI, and iterates until the code works and the UX feels right.”

He states that giving the AI a way to verify its own work – whether through browser automation, running bash commands, or running test suites – improves the quality of the end result by “2-3x”. The agent doesn’t just write code; it proves that the code works.
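The loop Cherny describes – generate, verify, feed failures back, repeat until green – has a simple shape. A toy sketch with stubbed generator and verifier functions, purely to show the control flow:

```python
def verification_loop(generate, verify, max_iters: int = 5):
    """Generate a candidate, verify it, and feed failures back until it passes."""
    feedback = None
    for attempt in range(1, max_iters + 1):
        candidate = generate(feedback)
        ok, feedback = verify(candidate)
        if ok:
            return candidate, attempt
    raise RuntimeError("gave up after max_iters attempts")

# Toy stand-ins: the "model" fixes itself once it sees the failure message.
def fake_generate(feedback):
    return "fixed" if feedback else "buggy"

def fake_verify(candidate):
    return (candidate == "fixed", None if candidate == "fixed" else "test_ui failed")

result, attempts = verification_loop(fake_generate, fake_verify)
print(result, attempts)  # converges on the second attempt
```

In Cherny’s setup, `verify` is a real signal – a browser session, a bash command, a test suite – which is exactly why the loop terminates on working code rather than plausible-looking code.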

What Cherny’s workflow indicates about the future of software engineering

The response to Cherny’s thread suggests a crucial shift in the way developers think about their craft. For years, “AI coding” meant an auto-complete feature in a text editor – a faster way to type. Cherny has shown that it can now function as a control system for labor itself.

“Read this if you’re already an engineer… and want more power,” Jeff Tang summarized on X.

The tools to multiply human output by a factor of five already exist. They just require a willingness to stop treating AI as an assistant and start treating it as a workforce. The programmers who make that mental leap first will not just be more productive. They will be playing a completely different game while everyone else is still typing.
