With AI generating code, getting better at writing code is pointless. What matters is getting better at defining what code should do — and creating the trials that verify it. The human role shifts from execution to designing constraints.
Origin
Oussama Ammar, entrepreneur and investor, articulates this concept in a 2026 podcast on the future of work with AI:
“I spent much more time thinking about how to test than thinking about how to code. What makes sense is knowing how to build a system where an AI can read the code it produces to improve quality on its own.”
This isn’t a new idea per se — Test-Driven Development (TDD) has existed since the 1990s. What’s new is that this principle becomes the central skill of developers in the generative AI era.
The Theory
TDD as precursor
Kent Beck formalized Test-Driven Development in Test Driven Development: By Example (2003): write tests before code. This inversion was already radical: it forces you to clarify what the code should do before deciding how to do it. With AI, the principle is amplified: the AI writes the code, the human writes the tests.
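The TDD cycle can be sketched in a few lines. This is a minimal illustration, not from any of the cited works; `slugify` is a hypothetical function invented for the example:

```python
# TDD order: the test exists before the implementation.
# slugify() is a hypothetical function used purely for illustration.

def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Only after the test is written do we implement just enough to pass it.
def slugify(text: str) -> str:
    return "-".join(text.strip().lower().split())

test_slugify()
```

The test is the specification; the implementation is whatever makes it pass.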
Deming and quality through constraints
W. Edwards Deming (Out of the Crisis, 1982): “You cannot inspect quality into a product.” Quality comes from the system of constraints — not from final human inspection. Applied to AI: reviewing AI-generated code line by line doesn’t scale. Defining rigorous tests that constrain the AI from the start does.
Poka-yoke applied to AI
Shigeo Shingo (Toyota, 1960s) developed poka-yoke: mechanisms that make errors physically impossible before they occur. Test architecture is the poka-yoke of AI development — constraints prevent drift before execution.
In Practice
In concrete terms, the working posture shifts:
Before (traditional development):
- Think through the algorithm
- Write the solution
- Test manually
- Fix bugs
With AI + Test Architecture:
- Precisely define what the system should do (expected outputs)
- Write tests that verify these outputs
- Ask AI to code until all tests pass
- AI reads and iterates on its own
The human no longer reviews code — they design the trials. This shift is fundamental: it requires thinking in outputs, not in process.
Concrete example: building an n8n workflow that extracts data from a PDF. Rather than reviewing the workflow code line by line, you define: “For this test PDF, the workflow must return exactly this data table. Any deviation is a bug.” The AI iterates until it passes the test.
Nuances and Limits
This approach requires knowing how to define precise tests — which is non-trivial. Defining the right quality criteria is often harder than writing the code. The concept thus pushes toward developing a new skill: specification precision.
Test architecture works best for systems whose outputs are objectively measurable. For creative tasks (writing, design, strategy), “tests” become evaluation rubrics — subjective but structured.
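A structured rubric can still be expressed in code, which keeps the pass/fail discipline even when scoring is subjective. The criteria, weights, and threshold below are illustrative assumptions, not a recommended standard:

```python
# A rubric as a weighted checklist: subjective scores, objective gate.
# Criteria, weights, and the 0.8 threshold are illustrative only.

RUBRIC = {
    "clarity":  0.4,   # weights sum to 1.0
    "accuracy": 0.4,
    "tone":     0.2,
}

def rubric_score(scores: dict) -> float:
    """Weighted average of per-criterion scores, each in [0, 1]."""
    return sum(RUBRIC[name] * scores[name] for name in RUBRIC)

def passes(scores: dict, threshold: float = 0.8) -> bool:
    """The threshold plays the role of a test's pass/fail verdict."""
    return rubric_score(scores) >= threshold
```

The scoring stays human (or model-assisted) and subjective, but the gate is explicit and repeatable.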
Sources: Beck, K. (2003). Test Driven Development: By Example. Addison-Wesley · Deming, W.E. (1982). Out of the Crisis. MIT Press · Ammar, O. (2026). Podcast