
15 Famous Thought Experiments That Will Break Your Brain

From the Trolley Problem to Schrödinger's Cat — the philosophical puzzles that have shaped ethics, physics, and consciousness for centuries. Each one explained, with the question that keeps people arguing.

Quick Answer

The most famous thought experiments include the Trolley Problem, Schrödinger's Cat, Ship of Theseus, Brain in a Vat, the Chinese Room, Plato's Cave, Mary's Room, the Veil of Ignorance, the Prisoner's Dilemma, the Philosophical Zombie, the Experience Machine, the Infinite Monkey Theorem, Twin Earth, the Ticking Time Bomb, and the Utility Monster. Each still fuels debate.

A thought experiment costs nothing to run. No lab, no funding, no ethics board. Just a question, posed carefully enough to crack open an entire field of human knowledge.

The fifteen experiments below span 2,400 years — from Plato's cave to the philosophical zombie — and every one of them still provokes argument. They've shaped how we think about morality, identity, consciousness, and reality itself. Some of them you'll recognize from pop culture. Others you'll wish you'd encountered sooner.

The Trolley Problem (1967)

Posed by: Philippa Foot
Field: Ethics
The question: Do you pull the lever to save five but kill one? Is action worse than inaction?

A runaway trolley is heading toward five people tied to the tracks. You can pull a lever to divert it onto a side track — where one person is tied. Do you pull it?

Most people say yes. Then the variant: what if instead of a lever, you have to physically push a large man off a bridge to stop the trolley? Same math. Five saved, one killed. But suddenly most people say no.

The Trolley Problem isn't really about trolleys. It's about whether morality is about outcomes (consequentialism) or actions (deontology). The fact that our intuitions flip depending on how directly we cause the harm suggests our moral reasoning is less consistent than we'd like to believe. Every self-driving car programmer is essentially coding their answer to this question.

Schrödinger's Cat (1935)

Posed by: Erwin Schrödinger
Field: Quantum Physics
The question: Is the cat alive and dead until observed? What counts as observation?

Schrödinger didn't propose this experiment because he believed a cat could be simultaneously alive and dead. He proposed it because he thought the idea was absurd — and he wanted to show that the Copenhagen interpretation of quantum mechanics led to absurd conclusions when scaled up from subatomic particles to everyday objects.

A cat is sealed in a box with a radioactive atom, a Geiger counter, and a flask of poison. If the atom decays, the counter triggers, the flask breaks, the cat dies. Quantum mechanics says the atom is in a superposition of decayed and not-decayed until measured. Does that mean the cat is in a superposition of alive and dead?

The real question isn't about the cat — it's about the boundary between the quantum world and the classical world. Where does superposition end and definite reality begin? Eighty-nine years later, physicists still disagree.

Ship of Theseus (~100 AD)

Posed by: Plutarch
Field: Identity
The question: If you replace every plank, is it still the same ship? When does identity change?

The Athenians preserved the ship of the hero Theseus. Over time, rotting planks were replaced with new wood. Eventually every plank had been swapped. Is it still the ship of Theseus?

Thomas Hobbes added a twist: what if someone collected all the old planks and rebuilt the original? Now there are two ships. Which one is the "real" Ship of Theseus?

This isn't ancient trivia. It's the core question behind personal identity (most of your cells are replaced over the years — are you the same person?), intellectual property (when does a remix become a new song?), and corporate law (is a company the same entity after every employee and product changes?).

Brain in a Vat (1981)

Posed by: Hilary Putnam
Field: Epistemology
The question: How do you know you're not a brain in a vat being fed fake experiences?

Your brain floats in a vat of nutrients. A supercomputer feeds it electrical signals that perfectly simulate reality — sight, sound, touch, everything. From the inside, you'd have no way to tell. So how do you know this isn't happening right now?

The Matrix made this mainstream, but Putnam's actual argument was more subtle. He wasn't trying to scare you — he was trying to prove that the scenario is self-defeating. If you were a brain in a vat, your word "brain" would refer to something in the simulation, not actual brains. So the sentence "I am a brain in a vat" would be false even if you were one. It's a semantic argument, not a sci-fi one.

Chinese Room (1980)

Posed by: John Searle
Field: Philosophy of Mind
The question: Can a machine truly understand language, or just simulate understanding?

Imagine you're locked in a room. Chinese characters are slid under the door. You have a rulebook that tells you which Chinese characters to send back. To someone outside, it looks like you understand Chinese. But you don't — you're just following rules.

Searle's target was "strong AI" — the claim that a computer running the right program literally understands and thinks. His point: syntax (rule-following) isn't the same as semantics (understanding). A program can perfectly simulate understanding without any understanding occurring.

This experiment is more relevant now than when Searle wrote it. Every time you chat with an AI and wonder whether it "really" understands you, you're in the Chinese Room.

Plato's Cave (~380 BC)

Posed by: Plato
Field: Epistemology
The question: Are we prisoners watching shadows on a cave wall, mistaking them for reality?

Prisoners have been chained in a cave since birth, facing a wall. Behind them, a fire casts shadows of objects carried past. The shadows are the only reality the prisoners have ever known. If one prisoner breaks free and sees the real world, then returns to tell the others — they won't believe him. The shadows are more real to them than reality.

Plato was making a point about education and enlightenment. But the allegory has aged into something broader. We now live in a world where most of what we "know" comes through screens and algorithms — curated shadows on a digital wall. The question isn't whether the cave exists. It's whether we'd recognize it from the inside.

Mary's Room (1982)

Posed by: Frank Jackson
Field: Philosophy of Mind
The question: If Mary knows everything about color science but has never seen red, does she learn something new when she finally sees it?

Mary is a brilliant scientist who has lived her entire life in a black-and-white room. She has studied everything there is to know about the physics and neuroscience of color — wavelengths, retinal responses, neural processing. She knows every physical fact about what happens when someone sees red.

Then she leaves the room and sees a red rose for the first time. Does she learn something new?

If yes, then physical knowledge isn't all there is — there's something about subjective experience (qualia) that can't be captured by science. If no, then consciousness is fully reducible to physics. Philosophers have been arguing about this for forty years and show no signs of stopping.

Veil of Ignorance (1971)

Posed by: John Rawls
Field: Political Philosophy
The question: What society would you design if you didn't know your place in it?

You're designing a new society from scratch. But there's a catch: you don't know who you'll be in it. You might be rich or poor, healthy or disabled, male or female, any race, any intelligence level. You're behind a "veil of ignorance."

Rawls argued that behind this veil, rational people would design a society with strong protections for the worst-off — because you might be the worst-off. It's a powerful argument for social safety nets, and it strips away self-interest from political reasoning.
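Rawls' decision rule behind the veil is essentially maximin: choose the arrangement whose worst-off position is as good as possible. A minimal Python sketch, with entirely made-up income figures for two hypothetical societies:

```python
# Hypothetical incomes for each social position you might land in.
# These numbers are illustrative, not from Rawls.
societies = {
    "laissez_faire": [200, 90, 30, 5],
    "safety_net":    [120, 80, 50, 40],
}

# Maximin: behind the veil you don't know which position you'll occupy,
# so pick the society whose WORST position is best.
choice = max(societies, key=lambda name: min(societies[name]))
print(choice)  # safety_net — its worst-off gets 40, vs 5 under laissez_faire
```

A pure expected-utility reasoner might instead compare averages (which favors laissez_faire here at 81.25 vs 72.5); Rawls' point is that under deep uncertainty about your own position, protecting the floor is the rational choice.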

The experiment's weakness is also its strength: nobody actually reasons this way in practice. We always know who we are. But imagining that we don't reveals how much of our political thinking is just rationalized self-interest.

Prisoner's Dilemma (1950)

Posed by: Merrill Flood & Melvin Dresher
Field: Game Theory
The question: Do you cooperate or betray when you can't communicate?

Two prisoners are arrested and separated. Each can either cooperate (stay silent) or defect (testify against the other). If both cooperate, they each get 1 year. If both defect, they each get 5 years. If one defects and one cooperates, the defector goes free and the cooperator gets 10 years.

The rational choice is always to defect — regardless of what the other person does, you're better off betraying. But if both players are rational, they both defect and get 5 years each, when they could have gotten 1 year by cooperating.
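The dominance argument above can be checked mechanically. A short Python sketch using the sentence lengths from the setup (lower is better):

```python
# Payoff matrix: years in prison for (my_move, their_move).
# "C" = cooperate (stay silent), "D" = defect (testify).
YEARS = {
    ("C", "C"): 1,   # both stay silent
    ("C", "D"): 10,  # I stay silent, they testify
    ("D", "C"): 0,   # I testify, they stay silent
    ("D", "D"): 5,   # both testify
}

def best_response(their_move):
    """The move that minimizes my sentence, given the other's move."""
    return min(["C", "D"], key=lambda me: YEARS[(me, their_move)])

# Defection dominates: it's the best response no matter what they do.
assert best_response("C") == "D"   # 0 years beats 1 year
assert best_response("D") == "D"   # 5 years beats 10 years

# Yet mutual defection (5, 5) is worse for both than mutual cooperation (1, 1).
print(YEARS[("D", "D")], YEARS[("C", "C")])  # 5 1
```

That gap between the individually rational outcome and the jointly best one is the whole dilemma.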

This isn't just a puzzle. It's the mathematical foundation of trust, competition, arms races, climate negotiations, and every situation where individual incentives conflict with collective benefit. The iterated version (played repeatedly) shows that cooperation can emerge — but only if there's a future.

Philosophical Zombie (1996)

Posed by: David Chalmers
Field: Consciousness
The question: Could someone behave identically to you but have no inner experience?

A philosophical zombie (p-zombie) is physically identical to you in every way. Same neurons firing, same behavior, same words. But there's nothing it's "like" to be them. No inner experience. No consciousness. Just a biological machine producing outputs.

If p-zombies are even conceivable — if you can imagine one without contradiction — then consciousness isn't purely physical. It's something extra, something that physical facts alone don't explain. This is Chalmers' "hard problem of consciousness," and it's the reason neuroscience can map every neuron in your brain and still not explain why you experience anything at all.

The Experience Machine (1974)

Posed by: Robert Nozick
Field: Ethics
The question: If a machine could give you any experience you want, would you plug in permanently?

A machine can simulate any experience — winning the Nobel Prize, falling in love, climbing Everest. From the inside, it's indistinguishable from reality. You'll never know you're plugged in. Would you choose the machine over real life?

Most people say no. Nozick used this to argue against hedonism — the idea that happiness is all that matters. We care about actually doing things, actually being a certain kind of person, and actually living in reality — not just experiencing the feeling of doing so. If you wouldn't plug in, you value something beyond experience.

Infinite Monkey Theorem (~1913)

Posed by: Émile Borel
Field: Probability
The question: Given infinite time, could random chance produce Shakespeare?

A monkey hitting keys at random on a typewriter, given infinite time, would almost surely type the complete works of Shakespeare. The probability of any specific sequence is absurdly small — but with infinite attempts, even absurdly small probabilities become certainties.
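The arithmetic behind "absurdly small per attempt, yet certain in the limit" fits in a few lines of Python. The 27-key typewriter (26 letters plus space) and the sample phrases are illustrative assumptions, not part of Borel's original statement:

```python
import math

def p_hit(p_single, n_attempts):
    """Probability that at least one of n independent attempts succeeds,
    i.e. 1 - (1 - p)^n, computed stably for tiny p."""
    return -math.expm1(n_attempts * math.log1p(-p_single))

# Chance that one random 18-keystroke burst types "to be or not to be":
p = 27.0 ** -len("to be or not to be")
print(p)  # roughly 1.7e-26 per attempt

# But the miss probability (1 - p)^n shrinks toward 0 as n grows.
# A toy 3-character target keeps the numbers visible:
p3 = 27.0 ** -3                 # about 5.1e-5 per attempt
print(p_hit(p3, 1_000_000))     # near-certain after a million attempts
```

The same curve applies to the full works of Shakespeare; the per-attempt probability just gets unimaginably smaller, so the number of attempts needed for near-certainty gets unimaginably larger — but never infinite in probability.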

The real insight isn't about monkeys or Shakespeare. It's about what infinity actually means. Our intuitions about probability completely break down at infinite scales. The theorem also raises questions about creativity: if random processes can produce Shakespeare given enough time, is there anything special about the process that actually produced Shakespeare?

Twin Earth (1973)

Posed by: Hilary Putnam
Field: Philosophy of Language
The question: If two substances look, taste, and behave identically but have different compositions, are they the "same thing"?

Imagine a planet identical to Earth in every way, except what they call "water" isn't H₂O — it's a different chemical compound, XYZ, that looks, tastes, and behaves exactly like water. When you say "water" and your Twin Earth counterpart says "water," do you mean the same thing?

Putnam said no. Meaning isn't just in your head — it depends on what's actually out there in the world. This is "semantic externalism," and it changed how philosophers think about language, meaning, and mental content. Your thoughts aren't just brain states — they're partly constituted by your environment.

Ticking Time Bomb (varies)

Posed by: Various
Field: Ethics
The question: Is torture justified if it could prevent a catastrophe?

A terrorist has planted a bomb that will kill thousands. You've captured them. They know where the bomb is. Time is running out. Is it morally permissible to torture them for the information?

This experiment is designed to test absolute moral principles. If you believe torture is always wrong, the ticking bomb forces you to accept thousands of deaths to maintain that principle. If you believe torture is sometimes justified, where exactly do you draw the line?

The scenario is deliberately unrealistic — in practice, you'd never have perfect certainty about the bomb, the suspect's knowledge, or torture's effectiveness. But that's the point. Thought experiments strip away practical complications to expose the raw structure of moral reasoning.

Utility Monster (1974)

Posed by: Robert Nozick
Field: Ethics
The question: If one being gets more happiness from resources than everyone else combined, should they get everything?

Imagine a being that derives enormously more utility — happiness, satisfaction, pleasure — from every resource than any normal person. According to utilitarianism (maximize total happiness), we should give this "utility monster" everything, even if it means everyone else gets nothing. Total happiness would be higher.
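The arithmetic is easy to make concrete. A Python sketch with made-up utility rates — the "monster" converts each unit of resource into 1,000× the happiness anyone else does:

```python
# Hypothetical happiness-per-resource-unit rates (illustrative numbers).
people = {"monster": 1000.0, "alice": 1.0, "bob": 1.0, "carol": 1.0}
RESOURCES = 100

def total_utility(allocation):
    """Total happiness under a {name: units} allocation."""
    return sum(people[name] * units for name, units in allocation.items())

# Naive utilitarianism: each unit goes to whoever converts it to the most
# happiness. With a utility monster, that's always the monster.
best = max(people, key=people.get)
utilitarian = {name: (RESOURCES if name == best else 0) for name in people}
equal_split = {name: RESOURCES / len(people) for name in people}

print(total_utility(utilitarian))  # 100000.0 — maximal; everyone else gets nothing
print(total_utility(equal_split))  # 25075.0 — fairer, but far lower total
```

The math is airtight and the conclusion is repugnant — which is exactly the tension Nozick wanted to expose.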

Nozick designed this to expose a flaw in utilitarian thinking: it can justify extreme inequality if the math works out. The utility monster shows that "maximize total happiness" doesn't necessarily mean "distribute fairly." Most people's moral intuitions rebel against feeding the monster — which suggests our moral framework is more complex than simple utility maximization.


The Dataset

All 15 experiments are available as a downloadable dataset on dtbse, with philosopher, year, field, and the core question for each.

Explore the Famous Thought Experiments dataset →

Each one is also available as a matchup — which thought experiment is more mind-bending? Cast your vote and see how the world ranks them.

Vote on thought experiments →

Frequently asked questions

What is the Trolley Problem?

Posed by Philippa Foot in 1967, the Trolley Problem asks whether you'd pull a lever to divert a runaway trolley away from five people but onto one. Most say yes. The variant — physically pushing someone off a bridge to stop the trolley — flips most intuitions. It exposes the conflict between consequentialist and deontological ethics and underpins self-driving car programming.

What is the point of Schrödinger's Cat?

Schrödinger proposed it in 1935 to argue that the Copenhagen interpretation of quantum mechanics led to absurd conclusions when scaled up. A cat sealed with a radioactive trigger should be both alive and dead until observed. The real question is where the boundary lies between quantum superposition and classical reality — a problem physicists still debate 89 years later.

What does the Ship of Theseus thought experiment mean?

First posed by Plutarch around 100 AD, it asks whether a ship is still the same ship if every plank has been replaced. Hobbes added the twist of rebuilding the original from the discarded planks. It's the foundation for debates about personal identity (your cells replace themselves), intellectual property, and corporate continuity.

Why are thought experiments important if they aren't real?

Thought experiments cost nothing to run — no lab, no funding, no ethics board — and they strip away practical complications to expose the raw structure of reasoning. They've shaped ethics, physics, and consciousness research for 2,400 years. Every time you debate AI understanding, simulation theory, or trolley-style ethics, you're using one.

Next post

The 15 Greatest Ice Cream Flavors, Ranked by the World