Given that it’s Halloween, I thought for this month’s post I’d get a bit festive by talking about 13 spooky concepts in philosophy, and I could think of nothing spookier than thought experiments. As I explained in my philosophical zombies post, thought experiments are a means of gaining insight into an idea by using the parameters of the experiment as a case against which to compare our existing intuitions about the world. Thought experiments exist in many fields, including physics, economics, mathematics, and especially philosophy. For this post, we’re going to focus on thought experiments strictly from philosophy whose contents or conclusions have spooky implications or fit the season’s motif.
What’s it like to be a bat?
As weird as this question sounds, it is exactly what Thomas Nagel asked in his 1974 paper of the same name. This unusual question is a rather clever way of invoking some of the same ideas as the philosophical zombie thought experiment, namely the persistence of the mind-body problem and the irreducibility of consciousness to the physical. Nagel’s attempts to reason his way to what it’s like to be a bat using objective facts that we know about bats – they fly, use echolocation, and sleep upside down – are meant to show how impossible the task appears. His point is that mere descriptions of these facts can’t reveal the subjectivity or the inner life of the bat. Effectively, every conscious thing has an experience of what it’s like to be itself which, according to Nagel, isn’t reducible to anything else. Nagel suggests that this analysis isn’t limited to bats. Indeed, it even applies to people distinct from ourselves (unless, of course, they’re p-zombies)!
The curious case of the Frankenstein fission
(Fission brain cases)
There are a series of thought experiments that philosophers have invented to tease out the metaphysics of personal identity. Often these scenarios involve catastrophic accidents and whole body transplants – like moving one person’s head onto someone else’s body. For the sake of this entry, we’re just going to focus on one of these, the fission case, which is perhaps the least intuitive of all.
To illustrate fission, let’s suppose there are three triplets: Mary, Teri, and Carrie. Teri and Carrie get into an accident that damages Teri’s right brain hemisphere and Carrie’s left. Mary decides to undergo surgery to give each sister the appropriate missing hemisphere so that they can fully recover. The question is what happens to Mary. After the surgery, Mary is clearly dead in the sense that her brain was bisected and removed from her body. But as we know from real split-brain patients, whose left and right hemispheres aren’t in communication, separated hemispheres appear to experience their own sense of consciousness. And even if this appearance is deceiving, Mary’s hemispheres contain the neural connections formed by Mary’s memories, experiences, beliefs, and desires. This is what philosophers call psychological continuity, and because of it, some philosophers like Derek Parfit suggest that it’s wrong to assume that personal identity is the only thing that matters for survival. So while the person Mary no longer has her body, for someone like Parfit the psychological continuity that remains is still just as good.
Conjuring the Cartesian Demon
(The evil demon)
This one is an oldie but a goodie that’s been with us since 1641, appearing in René Descartes’ Meditations on First Philosophy which was briefly mentioned in this blog’s first epistemology post. As part of his exercise in radical skepticism, Descartes came to question the role of our senses in producing knowledge about the world. His concern, regarding the extent to which our senses were reliable, conjured up a demon. Not just any demon, though, but an epistemological demon!
Descartes’ demon, often referred to as the evil demon or evil genius, is a hypothetical entity so powerful that it controls a person’s entire reality. Much like Neo couldn’t tell he was in the Matrix (until he used hax), a person under the influence of this demon wouldn’t realize it, as their senses would seem to function normally. The only difference is that their experiences would be created by the demon rather than generated organically through their own interaction with reality. The Cartesian demon, though fantastical, is one of the strongest skeptical arguments ever created and has spawned a whole series of related thought experiments. There are also versions of the demon problem, proposed by contemporary philosophers, which aren’t about skepticism at all. Those, however, are for another time.
Do you accept the Repugnant Conclusion?
(The mere addition paradox/repugnant conclusion)
Derek Parfit makes another terrifying contribution to this list. In his 1984 book Reasons and Persons, Parfit discusses a paradox in population ethics called the mere addition paradox. To begin illustrating it, look at the image below:
Every letter represents some hypothetical population. In each population’s graph, height represents how happy each member of that population is and width represents the population’s size, so a bar’s area represents the population’s total happiness. Since we are concerned with total happiness, height alone isn’t our focus. We can maximize total happiness either by making the existing individuals within a population happier or by adding additional persons so that the sum of happiness increases.
With this in mind, while population A looks good, A+ is better because A+ adds a subgroup with a positive quality of life, which makes its total happiness higher than A’s. The same analysis can be made between A+ and B-. B- is just B divided into two groups, so the two are equally good. It follows that if B- is better than A+, and A+ is better than A, then B is better than A. We can then keep introducing populations whose heights are smaller but whose total areas are greater than any preceding population’s until we get to Z:
Z has the lowest happiness level for any particular individual, but given the size of its population, Z’s total happiness is greater than A’s, which has very few individuals, each with a high level of happiness. The repugnant conclusion is that, by this logic, population Z – multitudes of individuals each personally far less happy than their counterparts in any other population – is better than population A, a much smaller population of individually much happier persons.
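The arithmetic driving the paradox is easy to make concrete. Here’s a minimal sketch on the total view; the population sizes and happiness levels are invented purely for illustration:

```python
# Hypothetical populations: (number of people, happiness per person).
# These figures are made up to illustrate the structure of the paradox.
populations = {
    "A": (1_000, 100.0),       # few people, each very happy
    "Z": (10_000_000, 0.1),    # vastly more people, lives barely worth living
}

def total_happiness(size, per_person):
    """Total utilitarianism sums happiness across everyone (bar area)."""
    return size * per_person

for name, (size, per_person) in populations.items():
    print(name, total_happiness(size, per_person))

# The total view ranks Z above A even though every individual in Z
# is far worse off than anyone in A -- the repugnant conclusion.
assert total_happiness(*populations["Z"]) > total_happiness(*populations["A"])
```

Any positive per-person happiness, however tiny, can outweigh A’s total once the population is large enough, which is exactly the lever the chain from A to Z pulls.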
Men, martians, and pain
(Mad pain, Martian pain)
Seeking a comprehensive philosophical theory of mind that can describe pain, David Lewis uses a thought experiment to illustrate two cases. In Mad Pain and Martian Pain (1980), Lewis gives us two cases of pain. The first is the case of the “madman.” When the madman feels pain, the same neurons fire as in everyone else, but he starts snapping his fingers, among other strange behaviors, and does nothing to relieve himself of the state because, given his abnormal wiring, he enjoys it. The second case is the Martian, whose physiology differs from ours. Instead of being realized by firing neurons, the Martian’s pain is realized by hydraulic pressure building in cavities in the Martian’s body. The Martian, like most humans, seeks to avoid this state as it’s uncomfortable for him.
Other philosophers have questioned whether both cases actually constitute pain, but assuming they do, Lewis’ thought experiment effectively serves as a critique of naïve versions of two theories of mind – identity theory and functionalism. Without getting too bogged down in the details, (physical-)type identity theory says that mental states (thoughts, feelings, beliefs) are identical to the physical states occurring concurrently with those same mental states. If my thoughts about finishing this post consist in specific neurons firing in a unique pattern, then for all intents and purposes my thought can be reduced to that specific sequence of neurons firing in my brain. Functionalism instead focuses on the causal roles any particular set of mental states plays in relation to the senses, observed behaviors, or one another. Functions, in this case, should be thought of as “modules” or software.
Here, with our madman and Martian cases, the identity theorist cannot recognize the Martian’s pain, and a functionalist using a limited version of functionalism can’t recognize the madman’s pain. Lewis therefore modifies the functionalist view so that it takes into account physical states typical of a population, based on the causal role those states typically play within that population.
This thought experiment comes to us from Donald Davidson in his 1987 paper Knowing One's Own Mind. As part of his thought experiment, Davidson imagines that he is struck by lightning and killed while hiking in a swamp. Coincidentally, at the same time, another lightning bolt strikes matter in some other part of the swamp in such a way that it takes Davidson’s form. For the sake of the scenario we’re not assuming anything like the dualism alluded to in Nagel’s bat case or the p-zombie thought experiment, and for all intents and purposes the Davidson who emerges from the swamp — the swampman — is a perfect duplicate of Davidson, with the same form and memories as the man who died.
The questions of identity raised by this scenario are numerous, but Davidson the philosopher constructs this case to talk about memory and belief. Davidson states that it’s impossible for the swampman to engage in the same cognitive behavior as him, despite having the same memories and beliefs. Why? From a strictly functionalist view, the swampman is the same: when the swampman sees Davidson’s colleagues and greets them, the same sets of neurons fire and the swampman is able to say their names. But for Davidson, since the swampman has no causal continuity with any object in the world, by definition he’s not engaged in the same cognitive activities.
This is a semantic point, so let’s unpack it. Unlike Davidson, the swampman did not form his memories through interaction with the world, which is what is meant by “causal continuity.” Rather, the swampman merely mimics Davidson’s memories, having assumed Davidson’s form when he (it?) came into being. If something like the act of recognition requires an initial event – a specific interaction which formed a specific memory – then the swampman is merely emulating Davidson’s cognitive functions. Thus, according to Davidson, the swampman’s utterances are nonsense because they have no referents, no real objects to which they refer. Since the swampman only knows the world through what are effectively implanted copies of Davidson’s memories, his words refer not to things in the external world but to nothing at all.
Brave New World
Philosophers aren’t the only ones capable of creating good philosophical thought experiments. This one uses Aldous Huxley’s acclaimed dystopian novel to illustrate some of the complications our ethical intuitions can create. At least two philosophers, Steve Peterson and Jeff McMahan*, have alluded to or mentioned the novel when discussing the non-identity problem.
The story involves genetically engineering populations of people with specific aptitudes so that they can fill designated roles in society. The population performing society’s menial tasks, the Delta caste, is designed in such a way that its members excel at manual labor and would not enjoy doing anything else. However, engineering humans in this manner effectively condemns entire swaths of people to servitude, which, to say the least, seems distasteful. Ethicists have pointed out, though, that our common notions of harm make it difficult to reasonably explain the nature of the harm done to the Delta caste, due to the non-identity problem. At the crux of this problem is the fact that the act we would consider the “harm” – in this case, creating people with features that condemn them to servitude – is something their very existence hinges upon. Were the system of Brave New World not in place then, as far as we know, none of the individuals in the Delta caste would exist, because the gametes selected to create those particular individuals would never have been fused and gestated.
The point of the non-identity problem isn’t necessarily to suggest that no wrong has taken place, but to reveal that our standard intuition about what harm is – making a specific person worse off than they otherwise would have been – is insufficient for wholly capturing the wrong in this case: if the harm in question is prevented, these individuals aren’t made better off; they simply never exist. Assuming the lives of the Delta caste members are worth living, existence would be preferable to non-existence for these individuals, and there’s no other kind of life they could have had. Fully addressing the complexities of this issue is beyond this post, so consider this an appetizer for a future one.
*My partner shared an anecdote: years ago, in one of her freshman philosophy classes, McMahan spoke about a scenario similar to the novel, but I'm not aware of him discussing it in any of his publications.
Experience it live!
(The experience machine)
Robert Nozick, in his 1974 book Anarchy, State, and Utopia, used the idea of the experience machine to attempt to defeat ethical hedonism – the view that pleasure is the only intrinsic good, akin to an individual utilitarian maximizing principle. For the thought experiment, Nozick imagines a machine we could choose to enter in order to simulate any lifelike pleasurable experience we wanted. There are different variations on the experiment, some proposing that individuals could choose to spend their entire lives in the machine. Nozick suggests that many people, even hedonists, might not plug in, because real experiences are more meaningful than simulated ones.
The cow who wants to be eaten
Yet another example of a thought experiment from science fiction! Readers of Douglas Adams’s Hitchhiker’s Guide to the Galaxy trilogy will recall that in The Restaurant at the End of the Universe, the protagonists encounter a sentient cow who entices them to consume parts of its body. This scenario raises a number of philosophical questions. For example, there’s the ethics of creating such a creature as well as the ethics of consuming it. The former was covered in the Brave New World entry, so let’s talk about the latter.
One of the biggest motivations for veganism in the Western world is likely animal welfare – namely, the suffering and systematic killing of sentient life such as cows. The primary question this case asks us to consider, absurd as it is, is whether consent alleviates the moral wrong of killing and consuming a self-aware (even human-like, in this case) creature. While many people already take animal suffering to be worthy of moral consideration, the fact that this cow has human-level intelligence might create additional moral obligations worth weighing, even if the animal consented to, or actually desired, being killed and eaten. The closest legal debate we have to this thought experiment is euthanasia, though that doesn’t exactly capture the particular nuances of this case. Still, the parallel may be worth considering, because the ethics of killing here could conceivably center on whether the cow – or any human-like intelligence, for that matter – has the moral right to dispose of its body in any way it pleases.
The monster in your utility calculations
(The utility monster)
Do you believe in monsters? Some ethicists do. In another thought experiment from Anarchy, State, and Utopia, Robert Nozick asks us to consider the idea of a utility monster: a person (or group) who derives far greater sums of utility from a given good than anyone else.
Consider the case of a boy who derives 950 units of utility (utils) from his grandma’s cookies because they’re really good, he baked them with her, he really loves her, and he has a rather unique psychology. Now imagine he comes across a starving man who has been lost in the woods for days. The man is hungry, but let’s say he’s committed to a keto diet and grew up with a distaste for cookies. Because he hasn’t had food in days, the man gets some positive quantity of utils from the cookies – maybe about 50. Who should get the cookies? Utilitarianism in its purest form says that we should maximize the total amount of good, and so the cookies should go to the boy, because he gets the bigger kick out of eating his grandmother’s cookies.
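The allocation rule at work here can be sketched in a few lines. This is just the cookie example from above; the util figures are the hypothetical ones from the scenario:

```python
# Utility each candidate would derive from the cookies (hypothetical utils
# from the scenario above).
utils_from_cookies = {
    "boy": 950,           # unique psychology: enormous enjoyment
    "starving_man": 50,   # hungry, but dislikes cookies
}

def allocate(utilities):
    """Pure total utilitarianism: give the good to whoever derives
    the most utility from it, regardless of need or fairness."""
    return max(utilities, key=utilities.get)

print(allocate(utils_from_cookies))  # the boy gets the cookies
```

The rule has no way to register that the man is starving; any agent with a big enough utility number swallows the allocation, which is the utility monster in miniature.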
The utility monster is an interesting contrast to the repugnant conclusion, the two being inverted versions of one another, each brought about by taking utility-maximizing logic to its ultimate conclusion. For Parfit, though, if we’re following a coherent total maximizing principle, Nozick’s monster is not possible. As an aside, if you’re wondering how we measure utility: in the real world we typically use money – someone’s willingness to pay for something – as a proxy for how happy that good makes them. In this scenario, just assume you’re a neuroscientist who can monitor brains and reliably correlate specific brain activity with some number of utils.
It’s important to point out that many philosophers have argued that utility monsters aren’t possible because of the law of diminishing marginal utility, something observed by both economists and psychologists – but perhaps there could be a psychologically unique boy who wouldn’t share cookies with a starving man. Some social theorists, though, find the term useful for talking about resource-intensive undertakings or special interest groups whose burdensome political demands diminish the public good.
In the late 1960s, philosopher Harry Frankfurt formulated thought experiments now referred to as Frankfurt cases. Their main purpose is to illustrate how someone could bear moral responsibility even in situations where it appears they can’t actually choose a different course of action. These illustrations challenge a particular argument for something called incompatibilism, which in a basic sense says that free will, or moral responsibility, is not compatible with the idea that the universe is determined.
Consider the following case taken from the Stanford Encyclopedia of Philosophy (though as it notes this particular case does not directly come from Frankfurt):
Black, an evil neurosurgeon, wishes to see White dead but is unwilling to do the deed himself. Knowing that Mary Jones also despises White and will have a single good opportunity to kill him, Black inserts a mechanism into Jones’s brain that enables Black to monitor and to control Jones’s neurological activity. If the activity in Jones’s brain suggests that she is on the verge of deciding not to kill White when the opportunity arises, Black’s mechanism will intervene and cause Jones to decide to commit the murder. On the other hand, if Jones decides to murder White on her own, the mechanism will not intervene. It will merely monitor but will not affect her neurological function. Now suppose that when the occasion arises, Jones decides to kill White without any “help” from Black’s mechanism.
The principle Frankfurt was constructing a case against, called the Principle of Alternative Possibilities (PAP), states that a person is morally responsible for what she has done only if she could have done otherwise. If the universe is determined – say, through the laws of physics – then no one ever has alternative possible choices, and so, by PAP, moral responsibility cannot exist. The parallel clearly uses Black’s device as a backdrop for introducing something like determinism into Jones’s world, but does it work? While many philosophers think that Jones freely chooses to kill White, and is responsible despite being unable to do otherwise, to some it’s not clear that PAP has actually been defeated. In a future post, we can go into more detail about disagreements over Frankfurt cases.
(The brain in a vat)
The brain in a vat thought experiment is the modern update to Descartes’ evil demon. Instead of a powerful supernatural entity, philosopher Gilbert Harman proposed a scenario in which a human brain is placed into a vat of nutrient fluids and attached to wires that stimulate it in the same way it’s stimulated during ordinary experiences. The simulation can be run by a mad scientist, aliens, a superintelligence, or whatever you’d like. Since we covered skepticism with Descartes’ demon, and even earlier in this blog’s history, we’re not interested in illustrating skepticism with this idea.
One case involving the brain in a vat (BIV), from Hilary Putnam, actually attempts to weaken its skepticism-inducing powers. Scaling the scenario up massively (Putnam has us assume every brain on earth is in a vat), Putnam muses that our situation would then be like a world that is an unmanned BIV simulation: we’re the brains, and the universe is the machinery. He also points out that because brains in vats can only recognize the internal world created by the simulation, propositions such as “there is a tree in front of me,” uttered by a vat citizen whose mind is being beamed the image of a tree, can’t immediately be taken as false. The referent of the word “tree” in the vat citizen’s experience isn’t the same as the concept of a tree in our world, or in some supposed world where the brain was never “envatted.” Perhaps the referent of the experience of seeing a tree in vat-land would be the neurons being triggered in the vat citizen, or the code in the simulation generating the image, but it would still refer to something. What Putnam is suggesting is that in a world where we were envatted (and had evidence to believe we always had been), we don’t necessarily have to entertain skepticism about our senses the way the original experiment suggests we should. Essentially, if we are the ones who are brains in vats, then it’s our terms – the ones we use every day – which successfully refer to aspects of the simulation.
You (probabilistically) can’t escape the matrix
(The simulation hypothesis)
This is yet another entry to get you to question the nature of your reality. The thought experiment was proposed by the philosopher Nick Bostrom in his 2003 paper Are You Living in a Computer Simulation? The simulation argument is probably one of the most misunderstood thought experiments in modern philosophy. It’s worth noting that despite surface similarities to Descartes’ evil demon, the brain in a vat, and other skepticism-inducing arguments, the simulation argument is not really one of them. It uses anthropic reasoning and a few fundamental assumptions to argue for the possibility that we’re living in a computer simulation. In this regard, Bostrom’s argument is much like cosmological arguments about why the universe supports life. Bostrom’s argument makes the following assumptions:
- Consciousness is substrate‐independent; that is, the type of material needed to create conscious experiences isn’t limited to what we observe biologically. There are theories of mind that explain how this could be true (computationalism, for example), though Bostrom isn’t actually committed to their metaphysical truth. This assumption allows conscious experiences to be computable and translated into the electrical inputs and outputs of something like a really big circuit board.
- Our descendants, presumably some posthuman individuals, will have the ability to harness stupendous amounts of energy and thus computational power. Presumably this will be enough power to create multitudes of vast “ancestor-simulations” (hopefully alongside more interesting ones) without even tapping into a fraction of this civilization's energy resources. Bostrom’s paper goes into detail about the specific amount of power needed, but the full details aren’t needed to grasp the crux of the argument.
Assuming it’s possible for our civilization to reach this posthuman stage and make such simulations, it’s probabilistically likely that we are already in one of these ancestor simulations, given that simulated people could vastly outnumber non-simulated people. In other words, for any human mind alive today, the probability of having been born into a simulated reality is high if posthuman populations running simulations can exist.
This idea might seem weird, but it emerges from anthropic reasoning. Anthropic reasoning tries to deal with the selection effects and incomplete information that arise from an observer’s position; this is why it’s often applied to issues cosmological in scope. From our position, it’s difficult to determine exactly how common life is in the universe, or how common universes capable of supporting life are, if multiverse theory is true.
What Bostrom is trying to illustrate is that, given his assumptions about the number of simulations that will be run, any observer who finds herself in the year 2018 asking whether she’s in a simulation could be any one of the countless persons asking the same question inside one of the astronomically many simulations that will be created. So if a person didn’t have enough information to conclude which world she was in, odds are she’d be better off betting that she was in a simulation.
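The anthropic step reduces to a simple fraction. A toy sketch of the betting odds, with observer counts invented purely for illustration (Bostrom’s paper works with far more careful estimates):

```python
# Toy version of the anthropic step: if simulated observers vastly
# outnumber non-simulated ones, a randomly selected observer should,
# absent other evidence, bet she is simulated.
# All counts below are invented for illustration.
real_observers = 10**10        # observers in the one non-simulated history
num_sims = 10**6               # ancestor simulations run by posthumans
observers_per_sim = 10**10     # each simulates a full human history

simulated = num_sims * observers_per_sim
p_simulated = simulated / (simulated + real_observers)
print(f"P(simulated) = {p_simulated:.6f}")  # overwhelmingly close to 1
```

The conclusion is only as strong as the inputs: if almost no civilizations reach the posthuman stage, or almost none run simulations, the fraction collapses, which is exactly the trilemma below.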
Bostrom’s argument doesn’t actually conclude that this world is a simulation; it argues that at least one of the following three propositions is true:
- "The fraction of human-level civilizations that reach a posthuman stage (that is, one capable of running high-fidelity ancestor simulations) is very close to zero", or
- "The fraction of posthuman civilizations that are interested in running ancestor-simulations is very close to zero", or
- "The fraction of all people with our kind of experiences that are living in a simulation is very close to one."
Which thought experiment did you think had the most chilling implications? Are there any favorites of yours that were missing and you wish were on the list? Comment below or share your thoughts on Twitter @Philosimplicity.