O-Matic Research Lab

Cross-Platform Extraction Drift: What AI Memory Gets Wrong

Case Study

9 documents extracted. All 9 invalid. Only a human audit caught it.

When you move AI-assisted work from one platform to another, what travels with it? The answer — discovered the hard way — is what the AI remembers, not what’s real. And those are very different things.

The Project

A brand and web design project spanning roughly 8 months, begun in OpenAI's ChatGPT. When the work migrated to Claude, a structured extraction prompt pulled 9 markdown files — brand specs, site architecture, content inventory. The complete knowledge base, neatly formatted.

It was thorough, well-organized, and completely wrong.

What the Extraction Missed

  • The site had migrated from flat HTML to WordPress
  • An entire revenue service page had been added — three pricing tiers, live and generating income
  • Consulting content had been deliberately rewritten to a holding page
  • Two blog posts had been published
  • Portfolio categories had been renamed and restructured

All 9 documents were effectively invalid.

Why It Happened

1. AI memory is a snapshot, not a feed. The operator built features, published content, and restructured the site without AI assistance. The AI didn’t know because it wasn’t watching.

2. Cross-platform continuity doesn’t exist. Work done in ChatGPT carries zero automatic continuity into Claude. There’s no API, no sync, no handoff protocol — just copy and paste with human judgment in between.

3. Extraction captures memory, not reality. The prompt asked the AI to export what it knew. It did — accurately reporting its own outdated understanding. The extraction was technically flawless. The data was fundamentally stale.

Who Caught It

The operator. Not the AI.

When the Closed Factory was activated, the operator initiated a live site audit — a systematic comparison of extracted documentation against the actual running website. Every discrepancy surfaced within the first pass.

The AI couldn’t catch this. It had no reference point beyond its own memory. Only the human could verify ground truth.
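The audit step described above can be sketched as a simple diff between what the AI remembered and what a crawl of the live site actually found. This is an illustrative sketch only: the page paths, labels, and the `audit` helper are invented for the example, not taken from the real project.

```python
# Hypothetical audit sketch: diff the AI-extracted site inventory
# against the live site's actual inventory. All paths and labels
# below are illustrative placeholders.

EXTRACTED = {            # what the AI "remembered"
    "/": "flat-html",
    "/consulting": "full service page",
    "/portfolio/print": "category",
}

OBSERVED = {             # what a crawl of the live site found
    "/": "wordpress",
    "/consulting": "holding page",
    "/services": "three pricing tiers",
    "/portfolio/identity": "category",
}

def audit(extracted, observed):
    """Return discrepancies between AI memory and ground truth."""
    added = sorted(observed.keys() - extracted.keys())      # on site, not in docs
    removed = sorted(extracted.keys() - observed.keys())    # in docs, not on site
    changed = sorted(
        path for path in extracted.keys() & observed.keys()
        if extracted[path] != observed[path]                # both have it, descriptions differ
    )
    return {"added": added, "removed": removed, "changed": changed}

report = audit(EXTRACTED, OBSERVED)
for kind, paths in report.items():
    for path in paths:
        print(f"{kind.upper():8} {path}")
```

Every category of drift from the list above shows up this way: the new revenue page appears as an addition, the renamed portfolio category as a removal plus an addition, and the rewritten consulting page as a change. The point stands that a human had to supply the `OBSERVED` side; the AI only ever held `EXTRACTED`.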

The Fix

  • 7 docs rewritten
  • 11 new items catalogued
  • 0 inflated claims

The Second Catch: Governance Drift

A Probot health check revealed project instructions were stale — written one day before a major factory architecture update. They referenced retired skills, listed wrong agent versions, missed Smith entirely. Garbage in, confused analysis out. The operator caught it by recognizing the confused output and tracing it to the root cause.
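A governance check like the one described can be reduced to a timestamp comparison: any instruction document last edited before the most recent architecture change is presumed stale until re-verified. The dates, filenames, and `stale_docs` helper below are invented for illustration, not drawn from the actual Probot health check.

```python
from datetime import date

# Hypothetical staleness check: flag governance docs whose last edit
# predates the most recent factory architecture update. All dates and
# filenames are illustrative placeholders.

ARCHITECTURE_UPDATED = date(2025, 6, 12)   # assumed date of the major update

DOCS = {
    "project-instructions.md": date(2025, 6, 11),  # written one day before
    "agent-roster.md": date(2025, 6, 14),          # updated after the change
}

def stale_docs(docs, cutoff):
    """Docs last edited before the cutoff are presumed stale."""
    return sorted(name for name, edited in docs.items() if edited < cutoff)

for name in stale_docs(DOCS, ARCHITECTURE_UPDATED):
    print(f"STALE: {name}")
```

The check is mechanical; interpreting the confused output and tracing it back to a stale instruction file was the part only the operator could do.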

“Neither party trusts the other blindly. That’s the Closed Factory promise — not autonomous AI, but disciplined partnership.”
