Concepts
Sourced concept cards — productivity, psychology, systems, AI.
35 concepts
Poiesis, Praxis, Phronesis
Aristotle, Nicomachean Ethics (~350 BC)
Aristotle's three modes of human action. AI automates poiesis (making). Praxis (acting with meaning) resists. Phronesis (judging under uncertainty) becomes the central skill of the agent economy.
Price Inelasticity
Alfred Marshall, 1890 — Principles of Economics, Macmillan
When quantity demanded barely responds to a change in price, small supply shocks produce outsized price swings — a 2% drop in oil supply can trigger a 20-30% price increase. The same dynamic applies to human skills facing AI substitution.
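The card's oil example can be checked with the textbook elasticity formula, E = %ΔQ / %ΔP. A sketch (the 2% / 20% figures are the card's own):

```python
# Price elasticity of demand: E = %change_in_quantity / %change_in_price.
# |E| < 1 means inelastic: quantity barely responds, so a small supply
# shortfall must be cleared by a large price rise.
def price_elasticity(pct_change_qty: float, pct_change_price: float) -> float:
    """Point elasticity from percentage changes."""
    return pct_change_qty / pct_change_price

# A 2% supply shortfall cleared by a 20% price rise implies:
e = price_elasticity(-2.0, 20.0)
print(e)           # -0.1 — deeply inelastic demand
print(abs(e) < 1)  # True
```

The smaller |E| is, the more violently price must move to absorb a given supply shock — which is exactly the leverage the card claims for scarce, hard-to-substitute skills.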
Silent Workforce Reduction
Observed empirically in SMEs and large companies, 2023-2026
AI doesn't only eliminate jobs through announced layoffs — it reduces headcount through directed attrition: not replacing departures, with one position gradually absorbing the work of several.
Desire as Unhappiness
Naval Ravikant, The Almanack of Naval Ravikant (2018)
Every desire is a contract signed with yourself to be unhappy until you get what you want. Not a call to suppress desire — but to recognize it as chosen suffering.
Happiness as Skill
Naval Ravikant, The Almanack of Naval Ravikant (2018)
Happiness isn't an inherited state or a destination — it's a learnable personal skill. It's cultivated through subtraction: removing the sense that something is missing.
Identity Shedding
Naval Ravikant, The Almanack of Naval Ravikant (2018)
Questioning and abandoning pre-packaged identities that filter perception. To think honestly, speak without identity. If all your beliefs align into a coherent package, be very suspicious.
Judgment Over Time
Naval Ravikant, The Almanack of Naval Ravikant (2018)
The ideal economic position: being paid for the quality of your decisions, not your time. With leverage, judgment that is just 10% better can be worth 1,000x more.
Long-Term Games
Naval Ravikant, The Almanack of Naval Ravikant (2018)
All returns in life come from compound interest. Trust and reputation compound over decades — play with the same honest people repeatedly.
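The compounding claim is just the standard growth formula, value = (1 + r)^n. A minimal sketch (the 10%/30-year numbers are illustrative, not from the book):

```python
# Compound growth: value = (1 + rate) ** periods. The card's point:
# a modest rate sustained for decades dwarfs large one-off gains.
def compound(rate: float, periods: int) -> float:
    return (1 + rate) ** periods

# 10% a year, held for 30 years:
print(round(compound(0.10, 30), 1))  # 17.4 — i.e. ~17x the starting value
```

The same curve applies to trust and reputation: the payoff of repeated honest games is back-loaded, which is why leaving the game early forfeits most of it.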
Permissionless Leverage
Naval Ravikant, The Almanack of Naval Ravikant (2018)
Code and media are forms of leverage that require no one's approval — zero marginal cost, infinite scale. The only reason one person can have the impact of a thousand-person company.
Small World Network
Duncan Watts & Steven Strogatz, Nature (1998) — Milgram experiment (1967)
In a well-structured network, any node is reachable from any other in ≤6 hops — thanks to hubs that create shortcuts between distant clusters.
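The shortcut effect is easy to demonstrate with a toy graph. Below is a sketch in the spirit of the Newman–Watts variant of the Watts–Strogatz model (shortcuts are added rather than rewired, so the ring stays connected); the parameters are illustrative, not from the paper:

```python
import random
from collections import deque

def small_world(n=1000, k=4, shortcuts=100, seed=42):
    """Ring lattice (each node linked to its k nearest neighbors)
    plus a handful of random long-range shortcuts."""
    random.seed(seed)
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for d in range(1, k // 2 + 1):
            adj[i].add((i + d) % n)
            adj[(i + d) % n].add(i)
    for _ in range(shortcuts):  # the "hubs": long-range shortcut edges
        a, b = random.randrange(n), random.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def hops_from(adj, start=0):
    """BFS distances (hop counts) from one node to all others."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

dist = hops_from(small_world())
print(max(dist.values()))  # far below the ~250 hops a bare ring would need
```

On the bare 1000-node ring, opposite nodes are ~250 hops apart; just 100 random shortcuts collapse the worst case to a couple of dozen hops — the mechanism behind "six degrees".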
Specific Knowledge
Naval Ravikant, The Almanack of Naval Ravikant (2018)
Knowledge that cannot be taught but can be learned — it emerges from the intersection of temperament, environment, and personal obsession. No one can compete with you on being you.
The Agentic Era
Gartner (2025), OpenAI Operator, Anthropic Claude Code (2025-2026)
The shift from task-by-task LLMs to autonomous agents executing complete workflows. 2026 is the inflection point — this scale change invalidates classical automation studies.
Work as Play
Naval Ravikant, The Almanack of Naval Ravikant (2018)
The identification signal of Specific Knowledge: what feels like play to you but looks like work to others. Real winners are so addicted they keep playing even when rewards diminish.
AI Identity Threat
Springer / MIT (2022) — Scientific Reports (Nature, 2025)
AI doesn't destroy jobs first — it erodes the tasks that gave those jobs meaning. This phenomenon, documented by MIT, undermines professional identity before employment itself is threatened.
Codifiability Threshold
Frey & Osborne (2013) / Arntz-Gregory-Zierahn OECD (2016) — synthesis
What determines whether a task shifts to automation is not its difficulty, but its codifiability — can its rules, patterns, or sequences be extracted and taught to a machine? Four technological breakthroughs have successively raised this threshold.
Poiesis vs Praxis
Aristotle, Nicomachean Ethics (~350 BC)
Aristotle distinguished two modes of human action: poiesis (producing a result) and praxis (acting for the meaning of the act itself). AI automates poiesis. What resists is praxis — but the boundary shifts with every technological breakthrough.
So-So Technology
Daron Acemoglu & Pascual Restrepo, NBER (2019)
A 'so-so' technology automates human work without creating enough new roles to compensate. Acemoglu & Restrepo's (2019) concept to distinguish technologies that enrich the economy from those that simply redistribute value toward capital.
Utility vs Meaning
Viktor Frankl, 1946 — applied to work in the AI era
Not all tasks are equal in the face of AI. Some have utility value — they can be optimized, delegated, automated. Others have meaning value — they are intrinsically human and resist automation. Two economies are separating.
Commitment Device
Thomas Schelling, 1978 — Harvard University
A commitment device is a constraint you voluntarily impose on yourself in advance to neutralize your own future weakness of will — before temptation strikes.
Positive Friction & Dark Patterns
Harry Brignull, 2010 / UCL London — Cass Sunstein, 2022
Positive friction is an obstacle voluntarily added by the user to protect themselves from their own biases. Dark patterns are the opposite: friction deliberately deployed against the user.
Adapted ICE Score
Sean Ellis, 2009 — adapted by Thomas Silliard
The ICE Score (Impact, Confidence, Ease) is an idea prioritization framework. This adaptation replaces Confidence with Clarity and Ease with Difficulty for solo use without user data.
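One plausible way to turn the adapted criteria into a ranking — the multiplicative formula below is an assumption for illustration; the card itself doesn't specify how the three scores combine:

```python
# Adapted ICE: reward Impact and Clarity, penalize Difficulty.
# The combining rule (I * C / D) is an illustrative assumption.
def ice_score(impact: int, clarity: int, difficulty: int) -> float:
    """Each input on a 1-10 scale; a higher score means do it sooner."""
    return impact * clarity / difficulty

ideas = {
    "automate weekly reporting": ice_score(8, 9, 3),
    "rebuild the website": ice_score(6, 4, 8),
}
for name, score in sorted(ideas.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {score:.1f}")
```

Replacing Confidence with Clarity matters for solo use: without user data you can't estimate confidence honestly, but you can always judge how well-defined the idea is.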
Context Engineering
Andrej Karpathy, 2024 — popularized by Oussama Ammar, 2026
Building explicit data and context 'pipes' to achieve results no one-shot prompt can produce. Context engineering goes beyond prompt engineering.
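A minimal sketch of such a "pipe": gather labeled sources, trim each to a budget, and emit one structured context block for the model. The section labels, budget, and function name are illustrative assumptions, not a real API:

```python
# Context engineering sketch: the value is in assembling the right
# inputs explicitly, not in a cleverer one-shot prompt.
def build_context(sources: dict[str, str], budget_per_source: int = 2000) -> str:
    """Concatenate labeled sources, each trimmed to a character budget."""
    parts = [f"## {label}\n{text[:budget_per_source]}"
             for label, text in sources.items()]
    return "\n\n".join(parts)

prompt = build_context({
    "Style guide": "Write in short declarative sentences...",
    "Prior output": "Last week's report concluded that...",
    "Task": "Draft this week's report in the same style.",
})
print(prompt.startswith("## Style guide"))  # True
```

Even this trivial version illustrates the shift: the human's work moves upstream, into deciding which sources belong in the context and in what form.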
Definition of Done
Ken Schwaber & Jeff Sutherland, Scrum — 1990s
A task is done when a verifiable, objective condition is met — not when you've 'made progress on it'. This explicit criterion closes mental loops and reduces lingering unfinished work.
Ego Depletion
Roy Baumeister, 1998 — Case Western Reserve University
Willpower behaves like a resource that is spent with each decision. But later replication attempts challenged the original findings, and the science on the exact mechanism is more nuanced than widely believed.
Explicitation
Michael Polanyi, 1958 — popularized in the AI context by Oussama Ammar, 2026
The key new skill for working with AI: defining each thought with near-surgical precision. AI can't interpret vague judgments — it needs explicit criteria.
Gall's Law
John Gall, 1975 — Systemantics
Every complex system that works evolved from a simple system that worked. No complex system designed from scratch ever works.
Goodhart's Law
Charles Goodhart, 1975
When a measure becomes a target, it ceases to be a good measure. Optimizing for an indicator corrupts it.
Implementation Intentions
Peter Gollwitzer, 1999 — New York University
Deciding in advance 'when X, I will do Y' multiplies task completion rates by 2 to 3 compared to a simple intention. The context triggers the action automatically.
Learning Through Play
Johan Huizinga, 1938 — popularized in the AI context by Oussama Ammar, 2026
Mastering a technology through intrinsically motivating projects before applying it to business. The beginner's advantage: no old habits to unlearn.
Parkinson's Law
Cyril Northcote Parkinson, 1955 — The Economist
Work expands to fill the time available for its completion. Without a defined time constraint, a task takes as long as you let it.
Test Architecture
Oussama Ammar, 2026 — building on Kent Beck (TDD, 1999) and W. Edwards Deming (1982)
When AI generates code, the real human skill is designing rigorous test systems. Humans no longer review code — they design the trials that code must pass.
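The card's idea can be sketched as a specification expressed in executable checks: the human writes the trials, and any implementation must survive them. Here `ai_sort` is a hypothetical stand-in for generated code:

```python
# The human designs the trials; the implementation is interchangeable.
def spec_sort(fn):
    """Trials a sorting function must survive, regardless of who wrote it."""
    assert fn([]) == []                         # empty input
    assert fn([3, 1, 2]) == [1, 2, 3]           # basic ordering
    assert fn([2, 2, 1]) == [1, 2, 2]           # duplicates preserved
    data = [5, -1, 0, 5]
    assert fn(data) == sorted(data) and data == [5, -1, 0, 5]  # input untouched
    return True

def ai_sort(xs):  # stand-in for AI-generated code (hypothetical)
    return sorted(xs)

print(spec_sort(ai_sort))  # True — this implementation passes the trials
```

Notice that `spec_sort` never looks at how `ai_sort` works — exactly the shift the card describes, from reviewing code to designing the conditions it must satisfy.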
The AI Thinking Pivot
Oussama Ammar, 2026
AI doesn't change how we work — it changes how we think about work. It's not a process evolution, it's a mental paradigm shift.
The Curse of Knowledge
Elizabeth Newton, 1990 — Stanford
Once you know something, you can no longer imagine not knowing it. You systematically overestimate what others understand without even realizing it.
The Feynman Technique
Richard Feynman, ~1950
To truly understand something, try to explain it simply. The gaps in your explanation reveal the gaps in your understanding.
The KISS Principle
Kelly Johnson, 1960s — Lockheed Skunk Works
Keep It Simple, Stupid. A system should be as simple as possible — unnecessary complexity is a liability, not a sign of sophistication.