Confidential · Investor Brief
Seed round · Open · Q2 2026

The AI moat
is not code.
It's invention.

Gargantua Labs discovers novel primitives inside frontier language models, patents them at the USPTO before public disclosure, and licenses the IP to the companies that need it. Twenty-five claims filed. Five model families replicated. A live product, DeltaWrite, already inside the portfolio.

25
USPTO claims filed
5 families
Replicated across Qwen · Llama · Mistral · Phi · Gemma
0×
Faster than LoRA on knowledge injection
~250 ms
Dispatch latency at N=500 facts
The investor thesis

Three shifts make IP the last durable moat in AI.

Agentic coding is collapsing the cost of building software. Model weights are becoming a commodity. What stays scarce — and defensible — is the primitive underneath: the mechanism itself, filed and owned.

01 / Timing

Code is free. Invention is not.

Every AI-native startup is now a clone target. The primitive underneath — the closed-form slot build, the dispatch geometry — can't be vibe-coded. We file before disclosure and the priority date locks the world out.

Window: 12–24 months
02 / Leverage

Weights churn quarterly. Patents run twenty years.

A fine-tune is obsolete the day GPT-6 ships. A granted patent compounds across every model that replicates the capability — Qwen, Llama, Mistral, whatever comes next. Our assets get more valuable as the ecosystem scales.

20-year compounding
03 / Distribution

The foundation labs are our distribution.

Every enterprise running an LLM is a potential licensee. We don't sell hosting. We don't need a go-to-market army. Open-source and frontier families do the scaling; we collect on the mechanism.

Zero CAC on adoption
Live demo

Write new knowledge into a frozen model. In one forward pass.

Not fine-tuning. Not RAG. Not in-context. DeltaWrite installs a fact directly into the weight matrix with no gradient computation — and it survives across prompts, sessions, and model reloads. Click a question. Watch the base model fail. Inject. Watch it answer.

DeltaWrite / PITWM

The primitive that shouldn't exist.

Rank-1 perturbation to a single projection matrix, built from a closed-form forward pass. No backprop. No training loop. Under oracle dispatch we hit 100% recall across 500 facts with zero control leaks. On paraphrased queries, end-to-end: 94%.
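As a hedged illustration of what "rank-1 perturbation, closed form, no backprop" means in general, here is a minimal least-squares-style rank-1 write in NumPy. This is a sketch of the public technique class, not the patented DeltaWrite mechanism; `k` and `v_target` stand in for the key/value activations a real system would extract from the model.

```python
import numpy as np

def rank1_write(W, k, v_target):
    """Rank-1 edit: make W_new @ k equal v_target while minimising ||W_new - W||_F.
    Closed form: W_new = W + (v_target - W k) k^T / (k^T k). No gradient computation."""
    residual = v_target - W @ k
    delta = np.outer(residual, k) / (k @ k)
    return W + delta, delta

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 4))     # stand-in for a frozen projection matrix
k = rng.standard_normal(4)          # key: hidden activation for the fact's subject
v = rng.standard_normal(8)          # value: output vector that encodes the new fact
W_new, delta = rank1_write(W, k, v)

assert np.allclose(W_new @ k, v)             # the fact is installed in one pass
assert np.linalg.matrix_rank(delta) == 1     # a single rank-1 perturbation
assert np.allclose(W_new - delta, W)         # reversible: subtract the stored delta
```

The same `delta` can be serialised and re-applied after a model reload, which is the shape of the cross-session persistence claimed above.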

Model
Qwen2.5-7B-Instruct (frozen)
Layer
27 · mlp.down_proj
Write cost
~112 ms · one forward pass
Persistence
Cross-session, serialisable, reversible
Patent
USPTO · filed 2026-04-11 · 25 claims
Traction — experimentally verified

Numbers we will show under NDA.

Every figure here is reproducible from the research repo. All ablations, seeds, and control prompts are pre-registered. Full protocol available to qualified investors.

01
25
USPTO claims, filed
Provisional · April 2026. Covers the primitive, the mechanism, and the system-level use. Non-provisional window open through April 2027.
02
100%
Oracle-gate recall, N=50 facts
Qwen2.5-7B-Instruct · paraphrased queries · seq_LL +23.6 nats. Pre-registered hypothesis. Re-run on Llama, Mistral, 14B — all pass.
03
0/500
Capacity stress · zero control leaks
Five hundred facts resident under a single dispatcher. 99.8% routing accuracy, no false fires on generic queries. ~250 ms per query.
04
94%
End-to-end recall, learned dispatch
N=50 Acme KB · paraphrased queries · MiniLM dispatcher. Full pipeline, no oracle. Transfers unchanged to Llama-3.1-8B and Mistral-7B.
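The dispatch step behind items 03 and 04 — route each query to at most one resident fact, and never fire on generic queries — can be sketched as nearest-key matching with an abstention threshold. Everything here (the toy embeddings, the 0.7 threshold, the `dispatch` helper) is an illustrative assumption, not the production MiniLM dispatcher:

```python
import numpy as np

def dispatch(query_emb, fact_keys, threshold=0.7):
    """Route a query to at most one resident fact, or None (no false fire).
    fact_keys: (N, d) unit-normalised embeddings of the N installed facts."""
    q = query_emb / np.linalg.norm(query_emb)
    sims = fact_keys @ q                      # cosine similarity to every fact key
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None

rng = np.random.default_rng(1)
keys = rng.standard_normal((500, 64))
keys /= np.linalg.norm(keys, axis=1, keepdims=True)   # 500 resident facts

# A paraphrase lands near its fact's key: the gate fires on the right slot.
paraphrase = keys[42] + 0.05 * rng.standard_normal(64)
assert dispatch(paraphrase, keys) == 42

# A generic query far from every key: the gate stays closed (no control leak).
generic = rng.standard_normal(64)
assert dispatch(generic, keys) is None
```

The abstention branch is what "zero false fires on generic queries" measures: a dispatcher that must answer every query cannot have a controlled leak rate.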
Why now

The IP window closes once everyone sees it.

“In a world where anyone can build anything, the only thing left to own is the underlying invention — and the claims that surround it.”

— Founding thesis, Gargantua Labs

2022 · Commodity
ChatGPT. Models become commodity inputs. Application layer explodes.

2024 · Convergence
Frontier labs converge on the same architecture. Feature parity within quarters.

2025 · Collapse
Agentic coding collapses the cost of replication. Code defensibility ≈ zero.

2026 · We are here
Gargantua files on the mechanism before any public disclosure. Priority locked.

2027+ · Harvest
The primitive ships everywhere. Every deployment is a licensable surface.
The moat — visualised

Why this compounds where fine-tunes don't.

Every license pays for the next filing. Every filing widens the moat.

Patent portfolio — compounds with ecosystem adoption
Fine-tuned adapter — obsolete at next base model
Code-only SaaS — commodity, no defensibility
Capital flywheel

Each license funds the next invention. Each new invention is a separable asset. Each new asset widens the claim surface. Investors buy into the flywheel, not any single patent.

The operating model

Invent. Patent. License. Repeat.

Three steps. A repeatable cadence. A clear revenue moment at step three.

I

Invent

We look at frontier models the way a mechanical engineer looks at a gearbox. We find the primitive nobody has filed on. We prove it generalises across model families before we move.

Current: 2 primitives
In flight: 3 more by EOY 2026
Time to validate: ~6 weeks
Model families tested: 5 (Qwen, Llama, Mistral, Phi, Gemma)
II

Patent

We file provisional before any public disclosure. The claim drafts cover the primitive, the mechanism, and the system-level use. Non-provisional conversion locks in 20-year enforceability.

Filed: USPTO 2026-04-11
Claims: 25
Priority date: LOCKED
Non-provisional window: open through 2027-04-11
III

License

We license the IP — non-exclusive, exclusive, or field-of-use. The base model stays neutral. Licensees run deployments inside their own stack, with their own data, under their own compliance.

Structure: upfront + royalty
Scope options: field-of-use, exclusive, non-exclusive
Pipeline: 3 LOI in progress
First close target: Q3 2026
Who's behind it

A small lab. By design.

Invention density per headcount is the metric we optimise. We keep the team small, the scope narrow, and the output legible.

N

Nathan Peterson

Founder · Principal Inventor

Sole inventor on the current filings. Background in mechanistic interpretability and applied transformer research. Runs the invention cadence.

Specter Foundation

Patent Counsel

External · AI specialist firm

Senior-partner-led drafting and prosecution. Specialises in software and ML claim strategy. Engaged on all active and planned filings.

Engaged · 2026
+

Research hire #2

Opening post-seed

Reserved for a post-seed technical hire focused on the next primitive in the pipeline. Full profile shared with committed investors.

Post-close
Objections — answered

The questions every VC asks.

Aren't software patents dead after Alice?
Post-Alice jurisprudence narrowed the field, not closed it. Our claims are drafted around concrete, non-abstract transformations on specific weight matrices at specific layers — exactly the shape of claim that survives §101. Licensing is the primary enforcement path, and licensees pay to avoid enforcement risk, not to test it.

What stops a frontier lab from inventing around you?
Nothing — which is why we file broadly. The primitive, the mechanism, and the system-level use are three distinct claim surfaces. An "invent-around" on one typically infringes another. Combined with our cross-family empirical evidence, the prior-art position is strong by filing date.

Isn't licensing revenue lumpy and slow?
Initial licenses are. Portfolios compound out of lumpiness. We're not selling one patent — we're selling access to a growing set of filings that every new frontier model accidentally replicates. The flywheel is in the portfolio, not the single asset.

Why a lab instead of a product company?
Optionality. A product company commits to one deployment shape. A lab retains the right to license every shape — enterprise on-prem, hyperscaler-embedded, sovereign-compute — simultaneously, in parallel, with no cannibalisation risk. If a flagship product opportunity emerges, we spin it out. Otherwise we stay light.

What does the seed actually fund?
Three things, in order: (1) the non-provisional conversion and international PCT fees on the current portfolio, (2) the second research hire to parallelise the invention pipeline, (3) licensing operations — counsel, term-sheet velocity, and the outbound motion into target accounts. Full use-of-funds breakdown in the investor deck.

What's the comparable?
IP-first licensing companies in semiconductors (ARM, Rambus, InterDigital) and in ML specifically (early Qualcomm, more recent HP patent-licensing spin-outs) give the template. We're betting the same structural dynamics hold for AI primitives — earlier in the cycle and with a broader licensee base.
Seed round · taking first-meeting requests

Invest in the lab.
Or license the IP.

Capital raise is live. Licensing slate opens Q3. We're prioritising a short list this quarter; first meetings order the cap table and the licensee queue. Whichever side you're on, the first step is a private one — NDA standard.

● Priority date · 2026-04-11 ● USPTO · 25 claims ● NDA standard
Or reach us direct
Investors
invest@gargantua.labs
Licensees
license@gargantua.labs