TL;DR

AI-assisted commits leak hardcoded secrets at 2x the baseline rate, and Claude Code specifically clocks a 3.2% leak rate. Those numbers come from GitGuardian's scanning of public GitHub, which turned up 28.65 million new secrets pushed in 2025, a 34% year-over-year jump and the steepest in the State of Secrets Sprawl report's history. The 2026 edition dropped on March 17, 2026.

I checked my own repos after reading this. Found two exposed staging API keys that had been sitting in a config file for three months. They came from a PR I’d accepted from an AI-generated refactor.

If you’re using AI coding tools (and you probably are), five pre-commit hooks plus a .env.example pattern block the majority of these. I walk through the whole setup below.

What the 2025 Data Shows

Here’s what GitGuardian found after scanning public GitHub activity throughout 2025:

28.65M: secrets leaked to public GitHub in 2025
+34%: YoY growth, the steepest on record
2x: AI-assisted commit leak rate vs. baseline
3.2%: Claude Code leak rate (vs. a 1.5% baseline)

For context, the previous year-over-year increase was around 18-20%. Something changed in 2025, and the obvious variable is the mass adoption of AI coding tools. GitHub Copilot crossed major adoption milestones, Claude Code launched, Cursor exploded in popularity, and a dozen other tools entered the market.

More code is being written faster than ever, and more secrets are leaking because of it.

Why AI Makes It Worse

The mechanics here aren’t complicated once you think about them. AI coding tools leak secrets for a few specific reasons:

1. Context window stuffing. When you feed an AI assistant your entire project context — config files, .env examples, docker-compose files — it absorbs those patterns. Then it reproduces them. If your example config has API_KEY=sk-abc123-real-key-here, the AI will generate code that follows the same pattern with inline credentials instead of environment variable references.

2. Autocomplete confidence. AI tools generate code that looks correct and complete. When Copilot fills in an API initialization block, it often includes a placeholder that looks like a real key. Developers accept the suggestion, test it, realize they need a real key, paste one in, and forget to swap it back to an env var before committing.

3. Speed kills (your security). Developers using AI tools ship code faster. That’s the reason people adopt them. But faster shipping means less time reviewing each line. A developer who manually types an API call is more likely to think “I should use an env var here” than one who accepts a 40-line autocompleted block.

4. Generated test files. AI assistants love generating test files with hardcoded values. “Here’s a test for your Stripe integration,” complete with a test API key that’s formatted exactly like a real one. Sometimes it is a real one from the training data.
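Point 1 is easiest to see side by side. A minimal sketch of the two patterns (the key string and variable names here are made up):

```python
import os

# The pattern AI tools reproduce when your project context contains
# configs with inline credentials (BAD -- this string leaks on commit):
# STRIPE_API_KEY = "sk_live_FAKEEXAMPLEKEY0000"

def load_stripe_key() -> str:
    """The pattern you want generated instead: the key lives in the
    environment, never in source control, and absence fails loudly."""
    key = os.environ.get("STRIPE_API_KEY")
    if not key:
        raise RuntimeError("STRIPE_API_KEY is not set; see .env.example")
    return key
```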

The GitGuardian data backs this up. The 2x multiplier comes from AI tools optimizing for speed and completeness, not from malice.

The Claude Code Problem

I want to call out the Claude Code number specifically: 3.2% of commits involving Claude Code contained leaked secrets. That’s higher than the general AI-assisted average.

Probably because Claude Code operates with more autonomy than something like Copilot. It generates entire files, refactors whole modules, and creates complete configurations. More autonomous code generation means more surface area for secrets to slip through.

I use Claude Code via the Anthropic API for a lot of my workflow (see my full comparison of Cursor, Claude Code, and Windsurf), so after reading the report I checked my own repos: I ran git log filtered by author, narrowed the results to commits where I'd integrated Claude Code output, and manually audited them. That's when I found those two staging API keys I mentioned.

3.2% sounds small until you do the math. If you make 30 commits a day with Claude Code assistance — not unusual during a heavy sprint — that’s roughly one secret leak per day. Over a week, that’s five or six exposed credentials sitting in your git history. Even if you delete the file, the secret lives in the commit history until you force-push a rebase or use something like BFG Repo-Cleaner.
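The arithmetic behind that claim, as a quick sanity check:

```python
commits_per_day = 30        # heavy-sprint pace from the example above
leak_rate = 0.032           # Claude Code rate from the GitGuardian report

leaks_per_day = commits_per_day * leak_rate   # 0.96 -- roughly one a day
leaks_per_workweek = leaks_per_day * 5        # 4.8 over five working days
leaks_per_week = leaks_per_day * 7            # 6.7 if you commit every day
```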

The Industry Response

The big players are responding.

OpenAI launched Codex Security, an AI-powered security agent specifically designed to find vulnerabilities in codebases. They published results from scanning 1.2 million commits: 792 critical findings and 10,561 high-severity issues. That’s a lot of problems caught, but it’s also a reactive approach. The secrets already hit the repo before Codex Security flagged them.

Anthropic launched Claude Code Security for vulnerability scanning. Same idea: use AI to find the problems that AI helped create. It’s a reasonable move, but I’d rather prevent the leak than detect it after the fact.

Both tools are useful but neither solves the root cause.

The root cause is that we’re generating code faster than we can review it. One developer on Hacker News described AI coding assistants as “mentally exhausting” because you’re constantly evaluating AI-generated code, maintaining context across a session, and staying vigilant for errors. Security review is just one more thing on that pile, and it’s often the first thing that slips.

Fixing Your Workflow

Here’s what I’ve set up in my own projects after reading the report. The tooling itself is years old; I was just too lazy to wire it up before.

1. Pre-commit hooks with detect-secrets

Install Yelp’s detect-secrets and run it as a pre-commit hook:

pip install detect-secrets
detect-secrets scan > .secrets.baseline

Add to your .pre-commit-config.yaml:

repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']

This catches secrets before they enter your git history. (If you haven’t used the pre-commit framework before, you’ll also need pip install pre-commit and a one-time pre-commit install in the repo to activate the hook.) It’s the single highest-impact change you can make.

2. A proper .gitignore

Your .gitignore should block the obvious stuff. Here’s what I add to every project now:

# Environment and secrets
.env
.env.*
!.env.example
*.pem
*.key
*.p12
*.pfx

# Cloud credentials
.aws/
.gcp/
credentials.json
service-account*.json
*-credentials.json

# IDE and tool configs that might contain tokens
.vscode/settings.json
.idea/
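It's worth confirming the negation rule actually re-includes .env.example, since gitignore pattern ordering trips people up. A quick sanity check in a throwaway repo (assumes git is installed):

```shell
tmp=$(mktemp -d) && cd "$tmp" && git init -q
printf '.env\n.env.*\n!.env.example\n' > .gitignore
touch .env .env.local .env.example
git check-ignore -q .env         && echo ".env: ignored"
git check-ignore -q .env.local   && echo ".env.local: ignored"
git check-ignore -q .env.example || echo ".env.example: not ignored (good)"
```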

3. Git-secrets as a second layer

AWS’s git-secrets tool catches AWS-specific credential patterns:

brew install git-secrets
cd your-repo
git secrets --install
git secrets --register-aws

This prevents committing anything that matches AWS key patterns. I run both this and detect-secrets because they catch different things.
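Under the hood, git-secrets is just matching regexes against staged changes. The canonical AWS access key ID shape is AKIA followed by 16 uppercase alphanumerics; a minimal sketch of the same check in Python (git-secrets --register-aws installs patterns like this one, plus more prefixes; the sample key in the test is fake):

```python
import re

# AWS access key IDs: "AKIA" + 16 chars from [A-Z0-9], not embedded
# inside a longer uppercase-alphanumeric run.
AWS_KEY_RE = re.compile(r"(?<![A-Z0-9])AKIA[A-Z0-9]{16}(?![A-Z0-9])")

def contains_aws_key(text: str) -> bool:
    """Return True if text appears to contain an AWS access key ID."""
    return bool(AWS_KEY_RE.search(text))
```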

4. Environment variable discipline

Every project gets a .env.example with dummy values and comments. When I use AI tools, I include this instruction in my system prompt or project context:

NEVER use hardcoded API keys, tokens, or credentials.
Always reference environment variables.
Use .env.example for documentation, never .env for actual values.

It won’t fix the problem completely, but it cuts the leak rate significantly, because the AI sees the pattern you want it to follow.
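To make that discipline concrete, I validate required variables once at startup rather than scattering os.environ lookups through the codebase. A minimal sketch (the variable names are examples):

```python
import os

REQUIRED_VARS = ["STRIPE_API_KEY", "DATABASE_URL"]  # example names

def load_config(env=os.environ) -> dict:
    """Fail fast with one clear error if any required variable is missing."""
    missing = [name for name in REQUIRED_VARS if not env.get(name)]
    if missing:
        raise RuntimeError(
            f"Missing environment variables: {', '.join(missing)} "
            "(copy .env.example to .env and fill in real values)"
        )
    return {name: env[name] for name in REQUIRED_VARS}
```

Failing at startup with one readable message beats a None credential surfacing as a confusing auth error deep inside a request handler.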

5. Post-push scanning

Set up GitGuardian’s free tier or GitHub’s built-in secret scanning on your repos. Yes, this is the “detect after the fact” approach, but it’s your safety net. Pre-commit hooks can be bypassed (accidentally or with --no-verify). A server-side scanner can’t.

The Bigger Picture

Simon Willison coined the term “agentic engineering” to describe what we’re all doing now, working with AI agents as coding partners rather than just using them as autocomplete. I think the framing is right and it points to the core tension.

When you pair-program with a human, you both understand the implicit rules. Don’t hardcode secrets. Don’t commit to main. Write tests. Experienced developers internalize those social norms over years. AI agents work off training-data patterns instead, and a lot of that training data contains hardcoded secrets. GitHub’s new policy to train Copilot on user interaction data adds another dimension to this concern.

The mental exhaustion that developers report is really about being the security boundary between a fast pattern-matching system and your production infrastructure. Every time you accept an AI-generated block of code, you’re implicitly signing off on its security properties. At 2x the baseline leak rate, we’re not doing that review well enough.

Keep using AI tools, the productivity gains are real. But treat AI-generated code the way you treat code from a junior developer who’s never heard of environment variables, and set up automated guardrails so verification doesn’t depend on your attention span at 4 PM on a Friday. Leaked credentials are one failure mode, but agents with production access can do far worse — four recent database wipes trace back to the same guardrail failures.

And leaked secrets aren’t just a local problem. When stolen tokens end up in the wrong hands, they fuel supply chain attacks like the axios npm compromise that hit 100M weekly downloads. On the flip side, AI is also getting better at finding the vulnerabilities those leaks enable: Anthropic’s red team recently used Claude to find 500+ zero-days in production open-source code.

FAQ

Are AI coding tools directly inserting real API keys from training data?

Sometimes, yes. There are documented cases of AI models reproducing real keys from their training data. But the more common scenario is that the AI generates a pattern that leads developers to paste in real keys. The tool creates a slot; the developer fills it with a live credential.

Is GitHub Copilot safer than Claude Code for secret leaks?

The GitGuardian report shows Claude Code at 3.2%, above the general AI-assisted average of roughly 3% (2x the 1.5% baseline). Copilot likely has a lower per-commit rate because it generates smaller code chunks (single-line or block completions vs. entire files). But more completions means more opportunities, so the aggregate risk might be similar.

Can I use AI tools to find secrets that AI tools leaked?

Yes, and that’s exactly what OpenAI’s Codex Security and Anthropic’s Claude Code Security are designed for. Codex Security found 792 critical issues and 10,561 high-severity problems across 1.2M scanned commits. These tools are useful as a detection layer, but they’re reactive. Pre-commit hooks are still your first line of defense.

What’s the fastest thing I can do right now to reduce my risk?

Install detect-secrets and add it as a pre-commit hook. It takes five minutes and catches the majority of hardcoded secrets before they hit your repo. After that, add a .env.example file and include explicit instructions about env vars in your AI tool’s system prompt or project context.

Should I audit my existing git history for leaked secrets?

Yes. Run detect-secrets scan across your working tree, and sweep your full history with a history-aware scanner like gitleaks (gitleaks detect scans every commit by default). If you find live credentials, rotate them immediately. Don’t just delete the file. The secret is in your git history forever unless you rewrite it.


Bottom Line

The 2026 GitGuardian report puts a hard number on something many of us suspected: AI coding tools are making secret leaks worse. 28.65 million leaked secrets, a 34% year-over-year spike, and a 2x multiplier on AI-assisted commits.

The fix is to build your workflow around the assumption that AI-generated code will contain secrets if you let it. Pre-commit hooks, proper .gitignore templates, environment variable discipline, and post-push scanning. Free tooling, fifteen minutes to wire up.

I’d rather spend those fifteen minutes on pre-commit hooks than explain to my team why our AWS keys are on GitHub. It beats rotating compromised credentials at 2 AM.