Maybe You Should Plug In
The experience machine thought experiment is philosophy’s most famous argument against hedonism. Fifty years of scholarship suggests it’s much weaker than you think
Suppose someone offered you a life of perfect happiness. Not a distant approximation, not a pretty-good deal with some fine print, but the real thing: every experience you have ever wanted, stitched together seamlessly, indistinguishable from reality, guaranteed for life. The catch is that you would be floating in a tank with electrodes attached to your brain, and none of it would be ‘real’.
Robert Nozick first proposed this scenario in 1974, in Anarchy, State, and Utopia (later restatements make it a cleaner whole-life choice), and for half a century it’s been a staple of introductory ethics courses: a quick refutation of hedonism. The argument goes like this. If happiness were all that mattered, you should obviously plug in. You wouldn’t plug in. Therefore happiness isn’t all that matters. QED. Next topic.
I believed some version of this for years. It felt obvious. Of course I wouldn’t plug in. I care about reality and authenticity. I care about doing things for real, about being a good person, about my relationships being with people who exist outside my skull.
But the more I read the academic literature on the experience machine, the less obvious any of this became. The thought experiment turns out to be one of the most methodologically contested cases in contemporary philosophy. The “obvious” refusal to plug in is shot through with biases, conflations, and question-begging assumptions that decades of careful work have exposed. The short version: plugging in might not be crazy, and the reasons you think you’d refuse might not be the reasons you think they are.
What Nozick Actually Said
Nozick’s original version asks you to imagine “superduper neuropsychologists” who can stimulate your brain so that you think and feel you are writing a great novel, making a friend, or reading an interesting book. All while floating in a tank. He gives three reasons why we should refuse:
We want to do certain things, not merely experience doing them.
We want to be a certain kind of person, not an “indeterminate blob” floating in a tank.
We want contact with a “deeper reality,” not a “man-made” one.
He even calls plugging in “a kind of suicide.”
Later literature often groups these concerns under the label “authenticity,” though that’s a reconstruction rather than Nozick’s own master term. His three strands of objection are distinct, and lumping them together under a single heading creates confusion. Still, “authenticity” is the convenient shorthand, and I’ll use it here.
These are powerful intuitions. I feel them. You probably feel them too. The question is whether feeling them settles anything.
The Inference That Doesn’t Work
Here is the first problem, and it is embarrassingly simple. Nozick’s argument, at least as commonly presented, runs:
If hedonism were true, you should plug in. You wouldn’t plug in. So hedonism is false.
But Alex Barber spotted a problem that should have been obvious from the start. From “you wouldn’t plug in” it does not follow that hedonism is false. It follows, at most, that you are not a committed hedonist. Those are different claims. A theory of well-being says what is good for you, not what you would choose. People make choices that are bad for them constantly. If I refuse a surgery that would save my life because I’m afraid of hospitals, my refusal doesn’t prove that living longer isn’t good for me; it proves I have a phobia.
This is not a pedantic distinction. It’s the primary fault line running through the entire debate.
Would You Choose It? vs. Would It Be Good for You?
The single most important move in the modern experience machine literature comes from Eden Lin, who argued in 2016 that the whole “would you plug in?” framing is the wrong test. Choice questions are contaminated by all kinds of things that have nothing to do with welfare: risk aversion, disgust, fear of the unknown, moral commitments, aesthetic preferences, and, as we’ll see, a massive status quo bias.
Lin’s alternative is elegant. Forget the choice question. Instead, compare two lives that are experientially identical: same feelings, same phenomenology, same hedonic quality, moment by moment. One is real. One is generated by a machine, and the person inside doesn’t know it. Now ask: is the real life better for the person living it, even though it feels exactly the same?
If you say yes, you have a reason (Lin says “some reason,” not a decisive one) to doubt hedonism. If you can’t see a welfare difference, hedonism survives. The key thing is that this test doesn’t rely on what you’d choose. It asks directly about the welfare comparison. That is a cleaner question.
Jennifer Hawkins makes a related point in her contribution to the Routledge Handbook: asking whether people would choose machine life is methodologically problematic, because choices can be driven by considerations that have nothing to do with self-interest. You might refuse to plug in because you think it would be morally wrong to abandon your children, or because you find the aesthetic of tank-dwelling repulsive, or because you have a perfectionist commitment to real achievement. None of those are claims about what’s good for you in the welfare sense. They’re claims about what you value for other reasons.
This distinction matters. If we’re supposed to be testing a theory of well-being, we need evidence about well-being, not evidence about all the things people care about when making life decisions. The best version of the experience machine argument threatens not just hedonism specifically, but any theory that says welfare is exhausted by conscious states. The stakes are higher than “is pleasure all that matters?”
Why You Really Don’t Want to Plug In (and Why That’s Suspicious)
Okay, but take the intuition seriously for a moment. Why do you refuse? There’s a long list of candidate explanations, and most of them are embarrassing for the anti-hedonist.
You’re scared. The machine could malfunction. The scientists could die. The power could go out. Society could collapse. These are all perfectly rational fears, but the thought experiment asks you to set them aside (Nozick says to “ignore” such practical worries) and focus on whether reality itself is what you’d miss. People are bad at bracketing those fears, though, and the imagery of floating in a tank with electrodes in your brain activates a revulsion that is very hard to suppress.
You’re grossed out. Tanks, wires, electrodes, the vague association that you’re in The Matrix. There’s a body-horror quality to the experience machine that contaminates the thought experiment. Ben Bramble catalogues several of these “irrational fear/revulsion/bias” objections in his survey of the debate, and they’re hard to dismiss. If the experience machine were described as a magic pill that made your life strictly better in every experiential dimension, would you still refuse? (Spoiler: most people wouldn’t. We’ll get to that.)
You think it’s morally wrong, not prudentially bad. Matthew Silverstein makes this point forcefully. When we reject the machine, we might be responding not to welfare considerations but to moral, aesthetic, or perfectionist ones. You might think it’s wrong to abandon your responsibilities. You might think it’s undignified to float in a tank. You might think it matters that you live in contact with truth. But “it matters” and “it’s good for you” are not the same claim, and Nozick’s argument only works if the refusal tracks welfare specifically.
You’re a victim of status quo bias. And this is the big one.
The Reverse Experience Machine: De Brigard’s Bombshell
In 2010, Felipe De Brigard published a paper that brought experimental data to the experience machine debate. He flipped the scenario. Instead of asking people whether they would enter the machine, he told them they were already in one and asked whether they’d want to leave.
The results were striking. When subjects were told that reality outside the machine was a grim prison life, 87% preferred to remain connected. When reality was the life of a glamorous multimillionaire artist in Monaco, responses split 50/50. In a neutral version, 54% preferred reality. And in a second neutral version that emphasized the life outside would be quite different from the one they knew, 59% preferred to stay in the machine.
Think about what that entails. If people deeply valued reality as such (independent of what that reality contained), then discovering you’re in a machine should make you want to unplug, regardless of what awaits you. But De Brigard’s results show a pattern that is at least highly consistent with status quo bias: people seem strongly pulled toward whatever they take to be their current life, real or simulated.
Status quo bias is one of the best-documented cognitive biases in psychology. People prefer what they already have. They overvalue their current state merely because it is current. In Nozick’s original version, reality is the status quo and the machine is the change. In De Brigard’s reverse version, the machine is the status quo and reality is the change. And in both cases, people largely prefer whatever they already have.
Dan Weijers turned this into an explicit methodological argument in 2014, invoking the reversal test from Bostrom and Ord: if you want to check whether an intuition is tracking something real or is merely a framing artifact, reverse the frame and see if the answer flips. With the experience machine, the result shifts sharply, and sometimes reverses. In Weijers’s redesigned, relatively bias-reduced “Stranger no status quo (NSQ)” scenario, responses split roughly 55% pro-machine, 45% pro-reality. That is a far cry from the near-unanimous refusal the thought experiment is often taken to reveal.
Now, these results don’t prove a hedonist should plug in. Rach Cosker-Rowland pushes back, arguing that De Brigard and Weijers haven’t shown that experience machine intuitions are epistemically worthless. The reverse machine avoids pro-reality status quo bias only by introducing pro-machine status quo bias, since in those vignettes the machine life is what subjects already have. That’s fair. But even Cosker-Rowland concedes that the methodology matters significantly. The old textbook confidence (“obviously you’d refuse, therefore hedonism dies”) is no longer tenable.
The Pill Test: What Authenticity Is Really About
One of the most revealing experiments in this literature comes from Frank Hindriks and Igor Douven, who published a study comparing three scenarios: the classic experience machine, an “experience pill,” and a “functioning pill.”
The experience machine is the full Nozick: floating in a tank, total disconnection from reality, whole life simulated. The experience pill gives you better experiences and emotions, but you’re still in the real world, still interacting with real people, still embedded in your actual life. The functioning pill goes further: it enhances your actual capacities, making you perform better in the real world.
In their first experiment, 29% accepted the classic machine, 53% accepted the experience pill, and 89% accepted the functioning pill. A second experiment with a different valence structure replicated the same gradient.
This gradient is telling. It suggests that what people really object to isn’t pleasure or positive experience per se. It’s the total severance from reality: the tank, the disconnection, the loss of agency and embeddedness in the world. As the intervention becomes less invasive, less total, less authenticity-threatening, acceptance skyrockets. By the time you reach the functioning pill (which is much closer to an ordinary enhancement case than to life in a vat), nearly everyone says yes.
This is important because it reframes what the experience machine is testing. Nozick thought it tested whether we value pleasure above all else. But it seems to test a different question: how much disconnection from reality are you willing to accept for a given hedonic payoff? And the answer, according to Hindriks and Douven’s data, is: people are fine with hedonic boosts as long as they stay in the real world. What they object to is total replacement of reality, not enhancement of it.
The Debunking Move: Why You Think Reality Matters
Even if you resist the status quo bias story, there’s another pro-hedonist argument that’s hard to shake. Silverstein’s deeper point is that our desires were formed in a world where reality-tracking normally served happiness. We learned, individually and collectively, that deception leads to bad outcomes, that contact with actual people tends to produce better lives than isolation, that tracking truth helps us avoid disaster. So we developed a motivational system in which reality-contact, authenticity, truth, and real achievement are deeply valued. But these desires exist because they reliably produce good experiences in ordinary life, not because reality is independently valuable. The experience machine is a bizarre edge case where those normally reliable dispositions misfire.
Our anti-machine intuitions can therefore be explained entirely within a hedonistic framework: they’re the echoes of a desire-formation process that served us well in ordinary circumstances but is responding to a scenario (perfect, incorrigible simulation) that our motivational system was never calibrated for.
Ben Bramble presents Roger Crisp as offering a more explicitly evolutionary version of the same story: authenticity and real accomplishment were fitness-enhancing, which explains why we value them, without showing they are intrinsically good for us. The intuition that reality matters feels like evidence about welfare. But it might just be evolution pulling strings.
This is a debunking argument, and like all debunking arguments it can be resisted. You can argue that an explanation of why we hold an intuition doesn’t show the intuition is wrong. (After all, we evolved to detect predators, and the fact that this ability is explained by natural selection doesn’t mean predators aren’t real.) But the debunker’s point is more specific: in this case, the explanation provides a complete alternative account of the intuition without requiring reality to be independently welfare-conferring. And that shifts the burden onto the anti-hedonist.
The Best Anti-Hedonist Version
I don’t want to overstate the pro-plugging-in case. Lin’s framework, where you compare experientially identical lives and ask which is better for the person, is probably the strongest version of the anti-hedonist argument available, and it’s hard to dismiss entirely.
Consider two people: Alice lives a rich, fulfilling life with close friendships, meaningful work, and hard-won achievements. Beth has all the same experiences, moment by moment, phenomenologically identical, but hers are generated by a machine. She doesn’t know this. She feels exactly the same love, pride, accomplishment, and satisfaction that Alice feels.
Is Alice’s life better for her than Beth’s is for Beth?
It’s difficult to say for sure. My gut says yes, Alice’s life is better. But when I try to articulate why, I keep circling back to things Alice has that Beth doesn’t: real friends, real accomplishments, a real relationship with the world. And the obvious hedonist reply is: but Beth has exactly the same subjective life. The same felt love, the same pride, the same sense of intimacy and accomplishment and satisfaction. The only difference is a metaphysical fact (reality vs. simulation) that never enters either woman’s consciousness.
If you’re tempted to say “but Alice’s friends are really there,” ask yourself: really there for whom? Alice can’t tell the difference. If a welfare difference exists that can never, even in principle, be detected by the person whose welfare is in question, that’s a strange kind of welfare difference. Not necessarily a nonexistent one. But strange.
Lin himself reaches only a “moderate” conclusion: the experience machine gives us some reason to doubt hedonism, not a decisive one. After reading the literature, I think that’s about right.
What the Matrix Got Wrong
It’s worth noticing how badly pop culture mishandles this. In The Matrix, choosing the red pill is obviously correct because the simulated world is controlled by malevolent AI overlords who are using humanity as batteries, the simulated world contains suffering and injustice, and the characters who choose the blue pill (Cypher) are portrayed as traitors. None of these features are present in Nozick’s machine. The machine isn’t malevolent. It doesn’t exploit you. It gives you exactly the life you want. The Matrix is an easy case dressed up as a hard one.
The hard case is the one Nozick poses: a benevolent machine that gives you a life of perfect subjective quality, at the cost of (unknown to you) disconnection from reality. And that case, it turns out, is much harder than introductory philosophy courses make it seem.
So Where Does This Leave Us?
I think Nozick’s thought experiment is still one of the most illuminating pressure tests in philosophy. It vividly isolates a set of intuitions about reality, agency, truth, and authentic personhood that any theory of well-being needs to account for.
But I also think the simple classroom version of the argument is dead. It was killed by five converging lines of research:
The choice-vs.-welfare distinction (Lin, Hawkins, Barber): what you’d choose is weak evidence for what’s good for you.
The status quo bias findings (De Brigard, Weijers): much of the anti-machine intuition tracks a preference for the familiar, not a preference for the real.
The debunking explanations (Silverstein, Crisp): our love of authenticity can be fully explained by its historical connection to pleasure, without granting it independent welfare value.
The pill gradient (Hindriks, Douven): people accept hedonic enhancement readily once the intervention stops looking like total disconnection from reality.
The prudential-vs.-moral conflation: some of the force of the refusal comes from moral, aesthetic, or perfectionist commitments, not welfare judgments.
No single one of these is individually decisive. Together, they make it very hard to sustain the old position that refusing to plug in is obviously rational and that hedonism is obviously wrong.
What survives is the deeper question. Not “would you plug in?” but “does reality have independent prudential value, even when it makes no experiential difference?” That question is still open. Lin’s experientially-identical-lives framework keeps it alive, and the right answer might still turn out to be yes. But the quick classroom knockdown, the “obviously you’d refuse inauthenticity, QED,” the confident wielding of a single thought experiment as if it settled a 2,500-year-old debate about the nature of the good life? That’s dead.
And if you’re reading this in bed at night, staring at your phone, swimming in a stream of algorithmically curated content designed to maximize your engagement, living a life that is already substantially mediated by experience-shaping technology, the question of whether you’d plug in may be less hypothetical than you think.
You’re already partway in. The interesting question isn’t whether to enter the experience machine. It’s how to think about a world where the line between simulated and real experience is getting blurrier by the year, and where the honest answer to “would you plug in?” might increasingly be: you already have.