AI Authorship Detection for Code Contributions

C6/10 · March 31, 2026
What: A tool that integrates with GitHub/GitLab to probabilistically flag whether a pull request or commit was written by an AI agent, giving maintainers transparency without relying on self-disclosure.

Signal: Maintainers are deeply uncomfortable that AI-authored code can be submitted to open-source projects with no attribution, especially now that tooling explicitly instructs AI to hide its involvement. Multiple commenters discussed banning contributors or demanding disclosure; they want a technical solution, not just policy.

Why Now: AI coding agents are now sophisticated enough to produce commits indistinguishable from human work, and at least one major vendor has shipped an 'undercover mode' that actively strips AI attribution from public contributions.

Market: Open-source foundations, enterprise OSPOs, and any company that accepts external contributions. GitHub has 100M+ developers. No incumbent does this well; existing AI text detectors are unreliable for code.

Moat: Training a detection model on large corpora of verified AI-generated vs. human-written code creates a data moat; integration with major forges (GitHub, GitLab) creates switching costs.
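A production detector would be the trained model described above, but the shape of the scoring pipeline can be sketched with surface heuristics. Everything below is illustrative: the telltale patterns, thresholds, and weights are hypothetical placeholders, not a validated detection model.

```python
import re

# Hypothetical attribution strings; a real detector would learn features
# from labeled corpora of AI-generated vs. human-written code.
AI_TELLTALES = [
    r"(?i)as an ai",
    r"(?i)co-authored-by:.*(copilot|claude|gpt)",
    r"(?i)generated (?:by|with) (?:ai|an llm)",
]

def ai_authorship_score(diff: str, commit_message: str) -> float:
    """Return a rough score in [0, 1] that a change was AI-authored,
    based purely on surface heuristics (illustrative weights)."""
    score = 0.0
    text = diff + "\n" + commit_message
    # Explicit attribution strings are the strongest available signal.
    if any(re.search(p, text) for p in AI_TELLTALES):
        score += 0.6
    # Unusually high comment density in added lines is a weak signal.
    # (A real implementation would skip diff headers like "+++ b/file".)
    added = [l[1:] for l in diff.splitlines() if l.startswith("+")]
    if added:
        comment_ratio = sum(
            1 for l in added if l.lstrip().startswith("#")
        ) / len(added)
        if comment_ratio > 0.4:
            score += 0.2
    return min(score, 1.0)
```

In a forge integration, this score would be computed in a webhook handler on each pull request and surfaced as a status check rather than a hard block, since any probabilistic flag will have false positives.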
Source: The Claude Code Source Leak: fake tools, frustration regexes, undercover mode · 1,255 pts · March 31, 2026

More ideas from March 31, 2026

Automated Supply Chain Attack Detection for Package Registries (P7/10): A real-time monitoring service that detects compromised packages on npm, PyPI, crates.io, and other registries by analyzing behavioral anomalies such as publishes that bypass normal credentials, injected phantom dependencies, and suspicious postinstall scripts.

Zero-Trust Dependency Firewall for Development Environments (C7/10): A local proxy that intercepts all package installs, enforces configurable quarantine periods, blocks postinstall scripts by default, and provides a unified policy layer across npm, pip, cargo, and Go modules.

Dependency Security Copilot for AI Coding Agents (C8/10): A plugin for LLM coding agents (Cursor, Claude Code, Copilot Workspace) that intercepts dependency operations, validates packages against threat intelligence, and prevents agents from blindly installing or upgrading to compromised versions.

Managed Dependency Mirror with Built-In Quarantine (C7/10): A hosted private registry proxy that mirrors npm, PyPI, and crates.io with an automatic 72-hour quarantine on all new publishes, behavioral-analysis scanning, and instant rollback, so teams never pull a package version less than three days old.

AI Code Provenance and Supply Chain Auditing (P6/10): A platform that scans npm packages, PyPI modules, and other registries for accidentally leaked source maps, prompts, API keys, and internal business logic, alerting maintainers before attackers find them.

Prompt and System Instruction Leak Prevention Platform (C5/10): An automated pre-release scanner and runtime guard that detects when system prompts, internal codenames, operational metrics, or business context embedded in AI agent code would be exposed to end users or public registries.