Monday, July 3, 2023
Scientific American has a brief article about the Sleeping Beauty Paradox. I think that thinking about it may illuminate the central importance of the idea of subjective probability:
So my analysis is this:
There’s a 50/50 chance that she’s in the one-waking universe. And she knows she’ll be woken up, so waking her up conveys no information. It’s a bit like the show Severance, and so she’s entitled to regard her chances of being in each universe as 50-50.
But from a different perspective, imagine this: there are two staff members in the two-wakings universe, neither of whom knows which universe they’re in. Each one has an independent one-time assignment to wake her up. Because they don’t know which universe they’re in, they don’t know whether someone else will wake her up or already has. (The day-of-the-week detail doesn’t matter: ignore it.) There’s also one staff member in the one-waking universe, assigned to wake her up, who also doesn’t know which universe they’re in. Each of the three -- offered a bet -- should bet that they’re in the two-wakings universe, since the odds are 2-1 that they are. Thus any particular moment of her waking up is more likely to occur in the two-wakings universe than in the one-waking universe.
That seems obvious. On the other hand, Sleeping Beauty gets no more information when she’s woken up than she had before, when it was 50-50 which universe she would be in. So there’s no reason for her to believe that she’s more likely to be in the two-wakings universe, since she’ll be woken up either way, and each waking experience is completely independent of any others. There is no set of events that she can refer to. Her subjective experience (each time!) is of being woken up once and only once.
So if she sees herself from outside she’s going to bet that the person waking her up is one of the pair who don’t know which universe they’re in but who would rightly bet they’re in the two-wakings universe. But she has to take that circuit, relying on other people’s subjective probability, rather than her own. More simply: subjectively she has one experience of waking up, and there's a 50-50 chance that she's in the one-waking universe when she has that experience. But vicariously she knows that the person waking her up would rightly bet that they're in the two-wakings universe. And they should bet that they're in the two-wakings universe because each would know vicariously that anyone waking her up should bet on being in that universe. The point being that the wakers know that two other wakers also have the task of waking her up, whereas she's the only Sleeping Beauty, and will only have a memory of a single experience of being woken up. (There may be a tense logic to this: her memory is part of a present-tense subjective experience of her own past, whereas the wakers are having a present-tense objective experience of the objective existence in the present of other wakers.)
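To make the betting argument concrete, here is a minimal simulation sketch (in Python; the setup, names, and trial count are my illustration, not part of the original argument): flip a fair coin to choose the universe, then tally the odds two ways, once per experiment and once per awakening.

```python
import random

# A minimal sketch of the betting argument: a fair coin decides between the
# one-waking and two-wakings universes; we then compare the per-experiment
# odds with the per-awakening odds. Names and trial count are illustrative.

trials = 100_000
two_wakings_experiments = 0
awakenings_total = 0
awakenings_in_two_wakings = 0

for _ in range(trials):
    two_wakings = random.random() < 0.5   # fair coin: which universe?
    if two_wakings:
        two_wakings_experiments += 1
        awakenings_total += 2             # she is woken twice
        awakenings_in_two_wakings += 2
    else:
        awakenings_total += 1             # she is woken once

print("P(two-wakings universe) per experiment:",
      two_wakings_experiments / trials)              # ~0.5
print("P(two-wakings universe) per awakening:",
      awakenings_in_two_wakings / awakenings_total)  # ~2/3
```

Per experiment the coin is 50-50, but a randomly sampled awakening lands in the two-wakings universe about two-thirds of the time -- exactly the wakers’ 2-1 bet.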
Anyhow, I think it's like the well-known two-envelopes problem:
There are two envelopes, one of which contains twice as much money as the other. You pick one, and then you’re given a chance to switch. Is there an advantage to doing so?
The argument for it: It’s just as though you’re flipping a coin, where the stakes are putting half of what you have at risk in order to double the amount. I have $10; I flip a coin and have a 50% chance of getting $20, and a 50% chance of losing only $5. Of course that’s a good bet, Pascalians!
The argument against: I know one has $10 and one has $20. 50-50 chance I have the $20 envelope and 50-50 that I have the $10 envelope. If I have the $20 and I switch I’ll lose $10. If I have the $10 I’ll gain $10. 50-50 chance either way of gaining or losing $10.
The argument for switching hits a paradox because then why shouldn’t you switch again after you’ve already switched, and do this forever?
The argument against is clear cut and obviously true, even if you use $n and $2n without knowing what n is.
In the argument for switching there seem to be three equally possible amounts of money: n, 2n, n/2. And it’s presented as though all three are possible outcomes. But only two are possible, once the envelopes are sealed. Subjectively 3, but objectively 2. So this is where Bayesian subjective probability hits an infinite loop that frequentist probability wouldn’t.
The infinite loop can be thought of this way as well: every time you’re asked whether you want to switch, the 1.25n that the switching argument says to expect (0.5 × 2n + 0.5 × n/2 = 1.25n) becomes the new n for the next round. We’re no longer talking about three possibilities being shoe-horned into a 50% chance each, but four, five, six..., i.e. an infinite number of possibilities, each of which seems to have a 50% chance of being true at the moment that you consider it. So switching an infinite number of times should yield an infinite amount of money, but that’s because you get stuck in a loop that can go on infinitely, because the amount of money is indeterminate from the start.
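Here is a small sketch, again in Python and again my own illustration, contrasting the two arguments: the pair of amounts is fixed first (n and 2n), so switching a sealed envelope gains nothing on average, whereas the naive argument treats the amount in hand as fixed and the other envelope as 2n or n/2.

```python
import random

# The pair of amounts is fixed first (n and 2n), then an envelope is assigned
# at random; switching is compared to sticking. The value of n is arbitrary.

def play(n=10, switch=False):
    envelopes = [n, 2 * n]
    random.shuffle(envelopes)
    mine, other = envelopes
    return other if switch else mine

trials = 100_000
stick = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(stick, swap)  # both ~15, i.e. 1.5n: no advantage to switching

# The naive argument instead holds the amount in hand, n, fixed and treats the
# other envelope as 2n or n/2 with equal probability, giving
# 0.5 * 2n + 0.5 * (n/2) = 1.25n -- which seems to favor switching forever.
```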
Sunday, October 1, 2017
Exit Monty Hall, but through which door?
Monty Hall will live on as the eponym of the Monty Hall problem.
Since it's now well-understood it might be worth recovering some of its spookiness. So I offer this recollection:
Among those who got it wrong at the time (early nineties) were the logician and philosopher Burton Dreben and the famous and eccentric mathematician Paul Erdös (I almost have an Erdös number of 2 -- if I can just convince my friend to publish some short piece with me!), and "Cecil Adams" of the Straight Dope, which is where I read about it. Marilyn Vos Savant got it right. I remember realizing she was right after I read the Straight Dope takedown of her, and feeling proud.
One night I explained it to Dreben with quarters over sangria. We had three quarters, two with even years and one odd. I would put them heads down (it was three-coin Monty!) and ask him to pick the odd year. He'd pick, I'd flip one of the evens, he'd always stick, and he'd lose 2/3 of the time.
Doing it that way was really eerie because there was a probabilistic ontology to the two remaining quarters, one being twice as likely to be odd as the other. They were physically unchanged and physically unremarkable, and yet this ghostly probability haunted and hung over them.
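For anyone who wants to re-run the quarters demonstration, here is a short simulation sketch (Python; the labels and trial count are mine, not from that evening with Dreben): sticking finds the odd-year quarter about a third of the time, switching about two-thirds.

```python
import random

# The quarters version described above: three quarters, one with an odd year;
# "Monty" reveals an even-year quarter from the two not chosen; the player
# either sticks with the original pick or switches to the remaining quarter.

def play(switch):
    quarters = ["even", "even", "odd"]
    random.shuffle(quarters)
    pick = random.randrange(3)
    # Monty flips over one of the remaining even-year quarters.
    revealed = next(i for i in range(3) if i != pick and quarters[i] == "even")
    if switch:
        pick = next(i for i in range(3) if i not in (pick, revealed))
    return quarters[pick] == "odd"

trials = 100_000
print("stick wins: ", sum(play(False) for _ in range(trials)) / trials)  # ~1/3
print("switch wins:", sum(play(True) for _ in range(trials)) / trials)   # ~2/3
```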
Tuesday, February 10, 2015
Emotional experience as the internalization of costly signaling
Costly signaling may be a universal biological phenomenon, an essential part of the structure of all biological activity. Yeast do it and so do ants and so do humans. They do it because it is the nature of costly signaling to be reliable, and the reliability of signals is of paramount importance.
For most organisms, though, signaling is not intentional. Paul Grice distinguishes between naturalistic and non-naturalistic meaning. Naturalistic meaning is the kind of thing characteristic of symptoms of disease. A fever of 102 degrees means an infection, for example. It doesn’t mean infection because the microbe or the person it infects is trying to declare that an infection is occurring. It means infection as a natural consequence of the microbe’s interaction with the body (e.g. the body may be reacting in such a way as to make itself an inhospitable environment for the microbe).
On the other hand, if I say that I feel feverish, nothing about those words naturally means that I might have a fever. You’d need to know English, and to know that I was uttering an English sentence, to know that my saying that meant the possibility of an infection. Utterances of this sort involve non-naturalistic ways of meaning.
Now one might think that these two kinds of meaning are actually two completely different phenomena. Naturalistic meaning might better be described as evidence for or consequences of some state of affairs. Whereas non-naturalistic meaning means to mean, so to speak. Its function (or at least a major part of its function) is to be interpreted. We don’t produce a fever in order to get our physicians to intervene (or when we do, that's interesting). But we do tell them we feel feverish in order to get them to intervene.
Grice is right, though, to connect these two kinds of meaning. The reason peacocks have such elaborate tails, the reason flamingos are pink, the reason some frogs are blue, is precisely that it is (at least a major part of) the function of such phenomena to be interpreted. Costly signaling (as in all three of these examples) and warning signs (as in the third) are “designed” (i.e. evolved) to be the object of assessment and appropriate response. Peacocks and flamingos signal their fitness by showing (respectively) that they can afford the costs of carting around so large and easily damaged a tail, or the costs of ingesting the poisonous foods that turn flamingos pink. And some frogs are blue, as is well known, to warn predators that they are poisonous and should be avoided.
In such cases a kind of automatic process of signaling and interpretation occurs. It’s equivalent to a “flag” in computer programming. Both the existence of the flag and the appropriate response to it come about automatically. The flag requires interpretation but does not require deliberation. Biological signals in general are part of a circuit of interpretation without deliberation. They’re game theory automatized. The basic idea of evolutionary game theory is that dominant strategies are selected automatically, because they are the ones that are successful, and success is an automatic selector. Signals are indeed meaningful, in a fuller sense of the term than is captured by the idea of evidence, residue, trace, relic, byproduct or some other such term implying the post-facto possibility of reconstructing a state of affairs. Evolutionary advantage accrues to organisms that can respond appropriately, relevantly, and efficiently to the state of affairs by seeing how various flags are natural ensigns meaning that such a state of affairs obtains, and to the organisms whose signals are well- and easily-interpreted.
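As a toy illustration of the flag analogy, here is a small sketch in Python; the class and method names are invented for illustration and aren't meant as a biological model. The flag is raised as a side effect and read automatically, with no deliberation anywhere in the loop.

```python
# Illustrative only: a "flag" that is set automatically by one event and
# handled automatically by another routine -- interpretation without
# deliberation. All names here are invented for the example.

class Organism:
    def __init__(self):
        self.infection_flag = False  # the signal

    def encounter_microbe(self):
        # The flag is raised as a natural consequence, not a decision.
        self.infection_flag = True

    def immune_response(self):
        # The response reads the flag and reacts automatically.
        if self.infection_flag:
            return "raise body temperature"
        return "no action"

host = Organism()
host.encounter_microbe()
print(host.immune_response())  # -> "raise body temperature"
```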
Cybernetics began as the science of automatic communication within a system, whether biological or mechanical. Automatic communication follows many of the same principles in both domains. For example, ants follow chemical trails and touch antennae when they meet each other, modifying their activities and pathways as a function of the frequency of their interactions. Deborah Gordon has dubbed this system the anternet, because it turns out to work pretty much the way the Internet does, automatically looking for the most efficient pathways between nodes. (The development of the Internet allowed scientists to notice the same sorts of things in the anternet.)
Now, at some point signaling and reception became more deliberative activities. Signalers became aware of the efforts and costs of signaling; receivers became aware, at least, of the signalers’ awareness of these things.
There are various evolutionary pathways that might lead to the development of non-automatic deliberation. Not all strategizing can be automatized. It is a basic theorem in automatic computation that novel situations will arise within a system that the system cannot solve optimally. Risk-taking, and eventually deliberate and therefore conscious risk-taking, will sometimes lead to very high payoffs (even if it will also sometimes lead to disaster). In evolution, novelty will also appear with the appearance of any new externalities (in ecology, environment, population, climate, or unexpected events).
In all such cases of genuine novelty there is no automatically dominating strategy. That’s what makes novelty novelty. The probabilities that enter into the calculation of expected utility, and thereby govern decision-making, are not established. They are therefore what we call subjective and not objective probabilities. As Richard Jeffrey argues, subjective probabilities are those for which there is no independent way to check them. Novel situations will always require subjective assessments of probabilities. (That’s what’s novel about them. Hume consistently argued that all probability was subjective, because every situation is novel.)
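A bare-bones illustration of the point about expected utility, with numbers invented purely for the example: the probabilities below are simply asserted by the decision-maker, and in a genuinely novel situation there is no frequency record against which to check them.

```python
# Expected utility with subjective probabilities: the weights are the
# decision-maker's guesses, not measured frequencies. Numbers are invented.

def expected_utility(outcomes):
    # outcomes: list of (subjective_probability, utility) pairs
    return sum(p * u for p, u in outcomes)

risky = expected_utility([(0.3, 100), (0.7, -10)])  # a guessed 30% chance of a big payoff
safe = expected_utility([(1.0, 5)])
print(risky, safe)  # 23.0 vs 5.0 -- the risky option "wins", but only if the guess holds
```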
Once the production and assessment of signals becomes a matter for deliberation, then both the signaler and the receiver will have to take into account the fact that the signaler is choosing to signal. Conscious signaling, whatever else it signals, also signals that fact. Signals become a sign of the deliberation that produces them, and to the extent that the receiver reads them that way, she recognizes the subjectivity of the signaler, who has deliberatively and deliberately made the choice he has. Because the choice isn’t automatic, the signaler is individuated. Here we have the beginnings of an irreducible individuality, a subjectivity that isn’t simply exemplary of species-being.
By "deliberative" here, I don’t mean a necessarily lengthy, careful, or well-considered process. All I mean is that subjectivity becomes part of the equation, so that some signals become expressions of subjective attitudes and commitments. These are the kinds of signals we call emotions or emotional expressions.
It is for this reason that it’s right to say (with Anthony Kenny and those he draws from) that emotions are reasons, not causes, for action. But I would add that their function (as signals) is to give reasons to others to act. They can only give reasons for others to act if they signal honestly that they are also giving those who express them reasons to act as well (which is a standard view of emotions). Emotions therefore have to appear causal to those to whom their expression acts as a signal: they have to start looking as though they are causal. Not automatically causal, though, but quasi-causal since they are routed through the subjectivity of the signaler.
If we were to try to express the schema of the expression of emotion in its most skeletal sense it would have to contain the following recursive component: Part of what I am feeling is that I am really trying to make you see that I am really trying to make you see what I am feeling. The really-trying aspect is the part of the emotion that points to the fact, or is pointed to by the fact (they’re the same thing) that the expression of the emotion is costly.
Emotions arise from and promote intersubjectivity. If the signaler is recognized as deliberative, then he also must consider how to signal to a receiver capable of such recognition. From an evolutionary standpoint, they are therefore emergent properties of the vast and intricate kinds of cooperation predicted by non-cooperative game theory (that is game theory where cooperation isn’t a prior part of the set-up or rules, but has to come about, if it does, on its own).
Consider supplication, as a kind of gold-standard expression of desperate emotion. The costs (measured as risks) of supplication are very high. The suppliant is utterly defenseless. In this way he shows that he really wants, and is really trying to make, the dominant figure who has the choice of mercy or murder see that he is defenseless, and that he really wants the dominant figure to see this. He gives such urgent reasons for being spared that, to him, they begin to feel like causes. They aren’t causes, but he needs them to be.