What: A marketplace that connects job seekers with employees at target companies who are willing to make referral introductions, with bounties paid on successful hires.
Signal: Multiple commenters independently converge on the same insight: the formal application process is broken, and the single best strategy is getting an internal referral. Yet many people, especially those who are introverted or have smaller networks, have no way to access this channel.
Why Now: Referral hiring has always been effective, but the application-spam crisis caused by AI cover letters and one-click apply has made cold applications nearly worthless, dramatically widening the gap between networked and non-networked candidates.
Market: Job seekers (willing to pay $50-500 per introduction) and employers (referral bonuses of $1K-10K already budgeted); TAM in the billions across the recruiting value chain. LinkedIn is the obvious incumbent, but its referral path is passive and buried.
Moat: Two-sided network effects: more referring employees attract more job seekers and vice versa, while trust and reputation scores on referrers create switching costs.
Open Source License Compliance Automation Platform (P6/10): An automated tool that scans codebases for open source dependencies, detects license obligations, and generates compliance reports to prevent accidental violations.
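The core scan loop such a tool needs can be sketched in a few lines. This is a minimal illustration, not the platform's implementation: it reads license metadata from installed Python distributions via the standard library, and the `COPYLEFT` keyword set is a hypothetical, simplified obligation list.

```python
# Minimal sketch of a dependency license scanner. Assumes installed Python
# packages expose license info via standard package metadata; a real tool
# would also parse lockfiles, manifests, and source-file headers.
from importlib import metadata

# Illustrative keyword list; real obligation detection uses SPDX identifiers.
COPYLEFT = {"GPL", "AGPL", "LGPL"}

def scan_licenses():
    """Return {package: license string} for every installed distribution."""
    report = {}
    for dist in metadata.distributions():
        name = dist.metadata.get("Name", "unknown")
        lic = dist.metadata.get("License") or "UNKNOWN"
        report[name] = lic
    return report

def flag_obligations(report):
    """Flag packages whose license string mentions a copyleft family."""
    return {pkg: lic for pkg, lic in report.items()
            if any(key in lic.upper() for key in COPYLEFT)}
```

The compliance report would then be generated from the flagged subset, with each entry mapped to its redistribution obligations.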
Open Source Maintainer Monetization and Protection Platform (C5/10): A platform that lets open source maintainers enforce license terms, track commercial usage of their projects, and collect fair compensation from companies using their work.
AI Code Provenance and License Attribution Engine (C7/10): A developer tool that traces the origin of every code snippet generated or suggested by AI, flagging license-encumbered code before it enters a codebase.
AI Agent Compliance Testing and Verification Platform (P6/10): A testing framework that systematically verifies whether AI coding agents actually follow user instructions, flagging cases where agents ignore explicit directives.
LLM Guardrail and Behavioral Steering Infrastructure (C7/10): An API layer that sits between AI agents and users, enforcing hard constraints on agent behavior, like a firewall for AI actions that prevents agents from overriding explicit user instructions.
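The "firewall" framing maps to a simple interception pattern: every action an agent proposes passes through a policy check before it executes. The sketch below is a hypothetical minimal version; the action names, the blocklist-style policy, and the handler protocol are all assumptions, and a production layer would enforce richer policies (scoped permissions, rate limits, human-in-the-loop approval).

```python
# Minimal sketch of a guardrail layer between an agent and the outside world.
# Every proposed action is checked against hard constraints and logged; a
# blocked action raises instead of silently proceeding.
from dataclasses import dataclass, field

@dataclass
class Guardrail:
    forbidden_actions: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def check(self, action: str, args: dict) -> bool:
        """Return True if the proposed action is allowed; log every decision."""
        allowed = action not in self.forbidden_actions
        self.audit_log.append((action, args, allowed))
        return allowed

    def execute(self, action: str, args: dict, handler):
        """Run the handler only if the guardrail permits the action."""
        if not self.check(action, args):
            raise PermissionError(f"blocked by guardrail: {action}")
        return handler(**args)
```

The key design choice is that enforcement lives outside the model: the agent cannot talk its way past a constraint because the check runs in ordinary code, not in the prompt.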
AI Agent Observability and Context Audit Tool (C6/10): A debugging and transparency tool that captures and displays the full context an AI agent is operating with (system prompts, file contents, conversation history) so users can understand why an agent behaved unexpectedly.
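The capture side of such a tool reduces to an append-only event log of everything that entered the agent's context. This is a hypothetical minimal recorder, not any existing framework's API: the event kinds shown are assumptions, and a real tool would hook the agent framework's tracing layer rather than require manual calls.

```python
# Minimal sketch of a context audit recorder: timestamps each piece of
# context an agent received so it can be replayed and inspected later.
import json
import time

class ContextAudit:
    def __init__(self):
        self.events = []

    def record(self, kind: str, content: str):
        """Capture one piece of context (system prompt, file, message)."""
        self.events.append({"ts": time.time(), "kind": kind, "content": content})

    def dump(self) -> str:
        """Serialize the full context timeline for display or diffing."""
        return json.dumps(self.events, indent=2)
```

Diffing two such dumps from a "good" run and a "bad" run is often enough to show which extra instruction or file caused the unexpected behavior.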