An accountant spends 60% of their time on data entry and account reconciliation. These are utility tasks — their value lies in the output, not in the fact that a human does them. AI takes those. But the remaining 40% — interpreting an anomaly, advising an executive in crisis, understanding what a number means for this company at this moment — these tasks have meaning. They can’t be delegated.
## Origin
Viktor Frankl, neuropsychiatrist and Holocaust survivor, develops in Man’s Search for Meaning (1946) the fundamental distinction between what has instrumental value (useful for achieving a goal) and what has intrinsic value (meaningful in itself, regardless of outcome).
“Everything can be taken from a man but one thing: the last of the human freedoms — to choose one’s attitude in any given set of circumstances.” — Viktor Frankl
Applied to contemporary work, this distinction takes on new significance with AI: utility tasks — repetitive, measurable, optimizable — are precisely what machines do best. Meaning tasks — contextualization, moral judgment, human connection, non-prescribed creativity — structurally resist automation.
## What the Research Says
### Automation targets tasks, not jobs
The most common misunderstanding about automation is the belief that AI will eliminate entire professions. Research shows something more granular: it displaces specific tasks within jobs.
Frey & Osborne (Oxford, 2013) estimate that 47% of US jobs are at risk. But their method reasons at the level of entire occupations. When the OECD redoes the analysis task by task (21 countries, 2016), the figure drops to 9%.
In 2023, Frey and Osborne themselves published a reappraisal (“Generative AI and the Future of Work: A Reappraisal”): they now refuse to give a headline figure, emphasizing that generative AI also makes jobs more accessible to lower-skilled workers. The IMF (January 2024) estimates 40% of jobs exposed to AI — with the same nuance: exposure does not mean displacement.
The gap between 47% and 9% isn’t a contradiction — it demonstrates that what gets automated is the routine portion of each role. Not the role itself.
David Autor (MIT) has documented this since the early 2000s: automation hollows out the middle of the labor market, leaving an hourglass shape. "Middle-skill" jobs disappear. What resists: highly skilled positions (non-routine cognitive) and, more surprisingly, low-skilled but non-routine physical or interpersonal positions — caregiving, food service, security.
### AI transfers expertise, it doesn’t destroy it
Brynjolfsson, Li & Raymond (MIT/Stanford, 2023) studied customer support agents using an AI assistant. Result: +14% productivity on average — but gains are concentrated among the least experienced workers (+35%). The most skilled: minimal gains.
What AI does: it transfers tacit expertise from senior workers to novices. It compresses the expertise premium on codifiable tasks. What remains differentiating for experts is precisely what can’t be codified: judgment, relationship, creation.
### “So-so” AI: lots of noise, little net value
Daron Acemoglu (MIT) introduces the concept of so-so technology: automation that doesn’t perform much better than humans but costs less. Automated call centers are the paradigmatic example. Result: labor displacement without real productivity gains.
His projection: the real productivity gains from current AI amount to only 0.53% of GDP over 10 years. Not for lack of technical capability — but because current AI substitutes rather than augments.
## The Theory
### Utility value vs. meaning value
| Utility tasks | Meaning tasks |
|---|---|
| Measurable and reproducible output | Output dependent on human context |
| Can be done by anyone (or anything) competent | Require presence, identity, lived experience |
| Value in the output | Value in the who and the how |
| Candidates for automation | Resistant to automation |
### Aristotle formulated this 2,500 years ago
The utility/meaning distinction isn’t new. Aristotle already distinguished:
- Poiesis: production oriented toward an external object. Value is in the product. It’s a means to an end.
- Praxis: action whose value resides in the action itself. Caring for someone, deliberating, creating. The activity is its own end.
“Activity has an end in itself, while making has an end as a product separate from the activity.” — Aristotle, Nicomachean Ethics
AI automates poiesis. What it cannot automate is praxis.
Hannah Arendt (1958) offers an even more direct warning: a society freed from routine work by automation might find itself with an empty freedom — people whose entire identity was built around utility work, without the resources to inhabit freedom.
## What Actually Resists Automation
A Max Planck Institute study (2024) on medical diagnosis illustrates the boundary concretely: human+AI teams make more accurate diagnoses than either humans alone or AI alone. Their errors are complementary.
But the doctor remains irreplaceable for what AI cannot do: the therapeutic relationship, communicating the diagnosis, shared decision-making under uncertainty, palliative care. This isn’t a consolation — it’s a bifurcation.
| Activity | Resistance | Reason |
|---|---|---|
| Data entry, routine accounting | Very low | 100% codifiable tasks |
| Tier-1 customer support | Low | Scripts + NLP |
| Radiology (image reading) | Low-Medium | AI = better technical precision |
| Medicine: therapeutic relationship | High | Empathy, shared uncertainty |
| Psychological therapy | Very high | Intersubjectivity, trust |
| Teaching (content transmission) | Low-Medium | AI can teach facts |
| Teaching (mentoring, identity formation) | Very high | Role model, presence, individual adjustment |
| Lawyer (legal research) | Low | AI dominates |
| Lawyer (litigation, negotiation) | High | Judgment, persuasion, relationship |
| Grief work, palliative care | Extremely high | Irreplaceable presence |
| “Average” creativity | Medium | AI beats the average human |
| Exceptional creativity | High | Top 10% humans > AI |
## In Practice
### Audit your tasks with this filter
For each recurring task, ask: “If an AI did exactly this in my place, would the result be identical for whoever receives it?”
- If yes → utility task. Automate or optimize it, don’t defend it.
- If no → meaning task. Develop it, deepen it, showcase it.
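The filter above can be sketched as a short script. The `classify_task` helper and the example task list are illustrative assumptions of mine, not part of any established tool — the point is only that the audit reduces to a single yes/no question applied task by task:

```python
# Minimal sketch of the task-audit filter: for each recurring task,
# would the output be identical if an AI produced it instead of you?
# Task names and yes/no answers below are illustrative assumptions.

def classify_task(output_identical_without_you: bool) -> str:
    """Apply the filter question to one task."""
    if output_identical_without_you:
        return "utility (automate or optimize)"
    return "meaning (develop and deepen)"

# Hypothetical audit of a lawyer's recurring tasks:
tasks = {
    "legal research": True,            # same result whoever runs the query
    "drafting standard clauses": True,
    "litigation strategy": False,      # depends on judgment and context
    "client relations in crisis": False,
}

for task, identical in tasks.items():
    print(f"{task}: {classify_task(identical)}")
```

Run on the sample audit, this prints the first two tasks as utility candidates and the last two as meaning tasks — the same split the profession table below draws by hand.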
### Examples by profession
| Profession | Utility (automatable) | Meaning (resistant) |
|---|---|---|
| Lawyer | Legal research, drafting standard clauses | Litigation strategy, client relations in crisis |
| HR | CV screening, interview scheduling | Final hiring decision, managing human conflict |
| Nurse | Patient data entry, medication reminders | Palliative care support, bedside clinical assessment |
| Developer | Boilerplate generation, auto-documentation | Complex system architecture, understanding business needs |
| Teacher | Factual content delivery, standardized grading | Mentoring, detecting individual blocks, identity formation |
## Nuances and Limits
The boundary is not fixed. What is “meaning” today may become “utility” tomorrow.
A 2025 study (Nature Scientific Reports, N=269) shows that passive AI use — copy-pasting generated content without engaging in the work — significantly reduces perceived meaning in work, self-efficacy, and sense of ownership. Active collaboration (human first, AI second to refine) neutralizes these effects.
It’s not AI that destroys meaning — it’s passive delegation. Meaning comes from effort, process, engagement.
A creativity study (100,000 humans vs GPT-4, 2026) shows that AI beats the average human on some measures of divergent creativity — but the top 10% of humans remain far ahead on poetry, novels, and speeches (+80% to +150% linguistic novelty). Mass creativity is being commoditized. Exceptional creativity remains human.
## Sources

- Frankl, V. (1946). Man’s Search for Meaning
- Frey, C. & Osborne, M. (2013). The Future of Employment, Oxford Martin School
- Arntz, M., Gregory, T. & Zierahn, U. (2016). The Risk of Automation for Jobs in OECD Countries, OECD
- Autor, D. (2019). Work of the Past, Work of the Future, NBER
- Acemoglu, D. & Restrepo, P. (2019). Automation and New Tasks, JEP
- Brynjolfsson, E., Li, D. & Raymond, L. (2023). Generative AI at Work, NBER
- Max Planck Institute (2024). Human-AI collectives in medical diagnosis
- Scientific Reports (2025). Relying on AI at work reduces meaning
- Aristotle, Nicomachean Ethics (c. 350 BC)
- Arendt, H. (1958). The Human Condition