Claude Code Source Code LEAK (How to build it yourself)


Introduction

On what turned out to be a remarkable day for the AI community, Anthropic accidentally exposed the source code of Claude Code — its flagship AI coding tool — through an engineer’s mistake. Unlike the earlier “Mythos” model leak from the same week, this incident was far larger in scope: a single included .map file (roughly 60MB) contained the entire minified, obfuscated TypeScript source, which could be decompiled into approximately 200,000 TypeScript files and 600,000 lines of readable code. Within hours, the internet had cloned it tens of thousands of times — one GitHub mirror alone amassed 32,000 stars and 44,000 forks. Anthropic confirmed and pulled the repository, but as the host Wes Roth notes, by then it was far too late.

Rebuilding Claude Code

Rebuilding Claude Code from the leaked source code is entirely feasible. The community has already prepared a complete build toolchain—you can either use pre-built binaries or compile it yourself from source.

🚀 Option 1: Use Community Pre-built Binaries (Simplest)

If you don’t want to deal with compilation details, you can directly use single-file executables already built by the community.

1. Get the Pre-built Version

Visit the Releases page of the winfunc/claude-code-single-binary repository and download the binary corresponding to your operating system.

| Your System | Recommended File |
| --- | --- |
| macOS (Apple Silicon M1/M2/M3) | claude-code-macos-arm64 |
| macOS (Intel) | claude-code-macos-x64 |
| Linux (x86_64, e.g., Ubuntu) | claude-code-linux-x64 |
| Linux (ARM, e.g., Raspberry Pi) | claude-code-linux-arm64 |
| Windows (x86_64) | claude-code-windows-x64.exe |

Tip: Files with “modern” in the name are optimized for newer CPUs, while those with “baseline” offer the best compatibility. If you encounter an Illegal instruction error on older hardware, try the baseline version.
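Picking the right asset can also be scripted. A minimal sketch (the asset names come from the table above and are assumed to match the release naming; verify against the actual Releases page before downloading):

```shell
#!/bin/sh
# Map the current OS/CPU pair to the asset name from the table above.
case "$(uname -s)-$(uname -m)" in
  Darwin-arm64)              asset="claude-code-macos-arm64" ;;
  Darwin-x86_64)             asset="claude-code-macos-x64" ;;
  Linux-x86_64)              asset="claude-code-linux-x64" ;;
  Linux-aarch64|Linux-arm64) asset="claude-code-linux-arm64" ;;
  *) echo "unsupported platform: $(uname -s)-$(uname -m)" >&2; exit 1 ;;
esac
echo "$asset"
```

You can then feed `$asset` into curl or wget to fetch the matching release file.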

2. Run It

After downloading, grant execution permission in your terminal and run it:

# For macOS/Linux
chmod +x ./claude-code-macos-arm64
./claude-code-macos-arm64

🛠️ Option 2: Build Manually from GitHub Source

If you prefer to build it yourself, or want to explore the code and enable hidden features (like BUDDY pet, KAIROS mode), follow these steps.

1. Obtain the Source Code

Choose one of the backup repositories to clone. Using pengchengneo/Claude-Code as an example—it’s explicitly labeled as “runnable”:

git clone https://github.com/pengchengneo/Claude-Code.git
cd Claude-Code

2. Prepare the Build Environment

You’ll need Bun and Node.js. Bun is the project’s JavaScript runtime and its primary build tool.

  • Install Bun: Open your terminal and run curl -fsSL https://bun.sh/install | bash. After installation, restart your terminal or run source ~/.bashrc to make the bun command available.
  • Verify the installation: bun --version should print something like 1.2.0 or higher.

3. Install Dependencies and Run

Navigate to the project directory, install dependencies, and run the source code directly:

# Install project dependencies
bun install

# Run source code directly (development mode)
bun run dev

To verify the build succeeded, you can check the version:

bun run version

✨ Advanced: Enable Hidden Features

One of the most interesting aspects of the leaked source code is that it reveals numerous features Anthropic hasn’t officially released yet, such as BUDDY (virtual pet), KAIROS (persistent assistant), ULTRAPLAN (cloud-based planning), and others.

These features are controlled by compile-time feature flags. The default build process does not include them. To experience them, you’ll need to modify the build scripts to manually enable these flags during compilation.

Taking BUDDY as an example, one approach is to modify scripts/build/build-executables.js or related build configuration files, injecting environment variables or custom constants when invoking the bun build command. For instance, you might see code like this in the build commands:

// This is an example; exact modification location depends on the actual source
define: {
  "process.env.FEATURE_BUDDY": "true",
  "process.env.FEATURE_KAIROS": "true",
}

Since build scripts may vary slightly between repositories, the most direct method is to search the source code for keywords like feature( or __DEV__ to locate where the feature flags are defined, then force them to true.
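As a concrete illustration of that search step, run against a stand-in file here since flag names like FEATURE_BUDDY are assumptions rather than confirmed identifiers; in the real repo you would skip the setup and run the grep from the Claude-Code checkout root:

```shell
# Create a stand-in source file standing in for the real repo's flag module.
mkdir -p demo/src
cat > demo/src/flags.ts <<'EOF'
export const FEATURE_BUDDY = process.env.FEATURE_BUDDY === "true";
export const FEATURE_KAIROS = process.env.FEATURE_KAIROS === "true";
EOF

# Locate candidate feature-flag definitions by keyword.
grep -rn 'FEATURE_' demo/src/
```

Once you know the file, you can flip the defaults there or inject the values via the build script's define block as shown above.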

Risk Warning: These hidden features are incomplete or insufficiently tested internal builds. Enabling them may cause unexpected behavior. Additionally, Anthropic is issuing DMCA takedown requests against these backup repositories—the repository you find may become inaccessible at any time. Consider forking a copy to your own account for archiving and research.


📝 Summary

| Option | Advantages | Best For |
| --- | --- | --- |
| Run Pre-built Binary | Ready to use, no build environment needed | Users who just want to experience the tool without technical hassle |
| Run from Source | Can read, modify, and debug the code | Developers, those wanting to learn the architecture |
| Enable Hidden Features | Experience unreleased features like BUDDY, KAIROS | Advanced users, curious explorers |

Video about the Claude Code Leak

Related Sections of Video:

💥 The Leak Itself — What Happened and How

An Anthropic engineer accidentally bundled a JavaScript source map alongside a public build of Claude Code. Source maps are developer tools designed to map minified code back to its human-readable form. Once the file was in the wild, anyone with basic tooling could reconstruct the full TypeScript codebase. Anthropic moved quickly to delete the repository, but the internet’s archiving reflex — cloning, forking, mirroring — had already spread copies far and wide. Importantly, no model weights, API credentials, customer data, or proprietary AI training secrets were exposed — only the application harness code that makes Claude Code function.
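To see why a shipped source map is so damaging: a map's optional sourcesContent field embeds the original, pre-minification source verbatim. The self-contained sketch below fabricates a tiny map and recovers the file it describes (assumes python3 is on PATH); the leak reportedly involved the same mechanism at roughly 60MB scale:

```shell
# Fabricate a minimal source map with embedded original source.
cat > bundle.js.map <<'EOF'
{"version":3,"sources":["src/hello.ts"],"sourcesContent":["export const hi = () => 'hello';\n"],"mappings":""}
EOF

# Write each entry of sourcesContent back out to its original path.
python3 - <<'EOF'
import json, os
m = json.load(open("bundle.js.map"))
for path, src in zip(m["sources"], m.get("sourcesContent", [])):
    os.makedirs(os.path.dirname(path) or ".", exist_ok=True)
    open(path, "w").write(src)
    print("recovered", path)
EOF
```

With a real 60MB map, the same loop yields the entire original TypeScript tree, which is exactly what the community mirrors contained.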

🚀 Unshipped Features Revealed in the Source Code

Perhaps the most consequential part of the leak was not the code itself, but the hidden feature flags set to false for public builds. These exposed Anthropic’s active development roadmap:

  • Chyros (likely the KAIROS flag noted above) — A persistent background agent that monitors GitHub repositories, runs autonomously without user input, and can be pinged from anywhere to report status or begin a task.
  • AutoDream — A background consolidation agent that runs during idle periods, reviewing past context and compressing session memory — analogous to REM sleep for AI.
  • Ultra Plan — A dedicated 30-minute remote planning session powered by a deep reasoning model that fully maps out a complex task before any execution begins.
  • Coordinator Mode — A multi-agent orchestration system where one Claude agent directs a swarm of worker agents, each with its own tools and scratch pad.
  • Real Browser Control — Full browser automation via a browsing agent, not just scripted interactions.
  • Persistent Cross-Session Memory — Memory that accumulates across sessions rather than resetting.
  • Voice Mode — Real-time voice chat with AI agents, similar to capabilities seen in other frontier AI tools.
  • Agent Scheduling / Cron Jobs — The ability to trigger agents on a timer or recurring schedule.
  • New Commands — Including /advisor (a second model reviewing Claude’s outputs), /bugHunter, /goodClaude, and the intriguing /teleport (suspected to handle session switching).
  • Agentic Crypto Payment Protocols (x42) — References to infrastructure enabling Claude agents to handle crypto transactions autonomously.
  • Mythos / Capibara Model — The upcoming next-generation model, codenamed Capibara, confirmed in the code alongside references to future model generations.

🐾 The Hidden Virtual Pet System (/buddy)

In a lighter but surprisingly elaborate easter egg, the source revealed a complete virtual pet (Tamagotchi-style) companion system hidden inside Claude Code. The /buddy command would spawn a personal ASCII companion tied to your user ID, chosen from 18 species including duck, capibara, dragon, ghost, axolotl, and a mysterious creature called “Chonk.” The system features a full gacha rarity mechanic (common through legendary, with a 1% legendary drop rate), shiny variants, and cosmetic accessories like propeller hats, crowns, and wizard hats. Each pet has three stats: Debugging, Chaos, and Snark. The timing of the leak — right before April 1st — suggests this was being prepared as an April Fools feature. Notably, to avoid conflicts with the model codename “Capibara,” Anthropic had encoded all 18 pet species names in hexadecimal to bypass internal build scanners.
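The hex-encoding trick mentioned above is trivial to reproduce. This one-liner is just an illustration of the idea, not code from the repository:

```shell
# Hex-encode a species name the way a simple string-scanner dodge might.
printf 'capibara' | od -An -tx1 | tr -d ' \n'   # → 6361706962617261
```

Decoding is equally trivial, which is why this hides strings from naive scanners but not from anyone reading the source.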

⚖️ The Legal Gray Zone: AI-Assisted Clean Room Engineering

One of the most thought-provoking dimensions of this story is what happened after the initial fork. A developer took the leaked TypeScript codebase and, using OpenAI’s Codex, converted the entire thing into Python — functionally recreating Claude Code in a different language within roughly 12 hours. This raises a question that existing copyright law is not well-equipped to answer: if AI can rapidly transform stolen code into a functionally identical but syntactically different product, has copyright been circumvented? Wes Roth frames this as AI-accelerated “clean room engineering” — historically a costly, slow legal strategy used by companies to reverse-engineer competitors’ products without direct code copying. With AI, this process now takes hours. No clear legal precedent exists yet, and Roth predicts this will be a major area of future litigation.

🧠 Other Notable Discoveries:

  • Frustration Detection — Claude appears to monitor the user’s phrasing to gauge whether they are losing patience, triggering some form of adaptive behavior.
  • Undercover Mode — Flagged by an AI researcher as anomalous; the community is still investigating what it does.
  • 187 Loading Spinner Verbs — Claude’s “thinking” spinner text draws from a pool of 187 unique verb phrases.

Conclusion & Key Takeaways

The Claude Code source leak is one of the most significant accidental disclosures in recent AI history — not because of what was stolen, but because of what it revealed about where AI coding tools are heading. Anthropic’s roadmap is now largely public: persistent memory, autonomous background agents, multi-agent orchestration, deep planning modes, and agentic crypto payments all point to a vision of AI that runs continuously, plans deeply, and acts independently.

Key Takeaways:

  1. The leak exposed Anthropic’s full near-term roadmap — background agents, persistent memory, voice mode, and multi-agent coordination are all in active development.
  2. No sensitive data was compromised — model weights, credentials, and customer data were not part of the leak.
  3. AI is accelerating legal gray zones — rapid cross-language code conversion using AI tools challenges how intellectual property law will work going forward.
  4. The OpenClaw effect is real — many of the leaked features appear to be direct responses to the capabilities popularized by OpenClaw, indicating how competitive pressure shapes AI product development.
  5. The community is still digging — the “undercover mode” and other deep features remain under active investigation.

