Wednesday, November 9, 2011

Quasi-Bayesian Placebos



The placebo effect is usually explained as “expectation of result X causes result X”, with the mechanism attributed to “expectation and/or conditioning”. That's really no explanation at all, and hardly even a good definition. Why should expecting a result make it happen? And why does conditioning sometimes lead to the opposite effect?

The answer is that conditioning for drug tolerance or salivation serves a specific purpose that has been selected for. Most of the time, however, the placebo response seems to be just a consequence of poor design.

When you feel pain, “see orange”, or perceive anything else, these brain states aren't direct representations of the sensory data. They are better understood as estimates of how important it is to perceive this way. In cases where false positives and false negatives cost roughly the same, the system starts to look a lot like an approximation of a hierarchical Bayesian estimator; once you take relative importance into account, it looks more like an expected-utility decision engine. Top-down connections deliver our 'prior' information so that it can be integrated with the bottom-up signal. To get the placebo effect, we fool the system by feeding it 'bad priors': you think you were given morphine, so [this part of] you expects to feel less pain, and your final pain estimate – your actual perception of pain – is lessened.
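To make the 'bad priors' story concrete, here's a minimal Python sketch of a single precision-weighted Gaussian update. This is the standard conjugate-Bayes toy model, not anything from the placebo literature itself, and every pain value and variance below is invented for illustration.

```python
# A minimal sketch of the "bad priors" idea, assuming one Gaussian
# (conjugate) update. All numbers are made up for illustration.

def gaussian_posterior(prior_mean, prior_var, obs_mean, obs_var):
    """Precision-weighted blend of a Gaussian prior and a Gaussian observation."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_var = 1.0 / (prior_precision + obs_precision)
    post_mean = post_var * (prior_precision * prior_mean + obs_precision * obs_mean)
    return post_mean, post_var

signal = 7.0       # bottom-up nociceptive signal (arbitrary 0-10 pain scale)
signal_var = 1.0

# Honest prior: no strong expectation either way.
print(gaussian_posterior(7.0, 4.0, signal, signal_var))   # -> (7.0, 0.8)

# Phony prior: "I was given morphine, so pain should be low."
# The posterior -- the actual perception -- drops to 4.5 even though
# the bottom-up signal is unchanged.
print(gaussian_posterior(2.0, 1.0, signal, signal_var))   # -> (4.5, 0.5)
```

The qualitative point survives the toy numbers: the more confidently the phony prior is held (the smaller its variance), the harder it drags the final estimate.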

If this doesn't sound right, try it for yourself.

As I mentioned earlier, it's crucial to notice that if you're trying to hack a human mind, you've got to stop anthropomorphizing humans. For the placebo effect to work, the node representing the goal must light up (or stay dark), and that's it. Placebo doesn't require anything like a system-wide coherent belief. The person doesn't have to verbally assign high probability to it working, and they can be genuinely surprised when it does. It would be more accurate to say that placebo depends on alief, but even that presupposes a somewhat coherent next level down.

We appear to be many-layered, where each layer performs something like a Bayes' Law update based on the data coming from below and the priors coming from above. Acceptance of a phony prior at any point in the hierarchy can produce results that look like the suggestion worked, so it's not always obvious at which level the suggestion was implemented.
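Here's a toy version of that layered picture, again in Python with made-up numbers: each layer does the same precision-weighted fuse of the estimate from below with its prior from above, and tampering with any single layer's prior shifts the final percept. The three-layer setup is purely illustrative.

```python
# A toy hierarchy: each layer fuses the estimate coming up from below
# with the prior handed down from above, using the same precision
# weighting as in the previous sketch.

def fuse(prior, estimate):
    """One layer's update: precision-weighted blend of two (mean, var) pairs."""
    (pm, pv), (em, ev) = prior, estimate
    var = 1.0 / (1.0 / pv + 1.0 / ev)
    mean = var * (pm / pv + em / ev)
    return mean, var

def percept(layer_priors, sensory):
    """Pass the sensory estimate up the hierarchy, one prior per layer."""
    estimate = sensory
    for prior in layer_priors:          # lowest layer first
        estimate = fuse(prior, estimate)
    return estimate

sensory = (7.0, 1.0)                    # raw pain signal
honest = [(7.0, 4.0), (7.0, 4.0), (7.0, 4.0)]

# Inject a phony prior at just the middle layer:
hacked = list(honest)
hacked[1] = (2.0, 1.0)

print(percept(honest, sensory))         # ~7.0: percept tracks the signal
print(percept(hacked, sensory))         # ~5.0: pulled toward the suggestion
```

Note that the hacked output alone can't tell you which layer the phony prior was injected at; that ambiguity is exactly what the experiments below are probing.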

Brain scans, Stroop-like tests, and clever real/simulator experiments show that suggestions make it far enough down the hierarchy to be interesting, but generally not all the way down. Of course, the wording of the suggestion affects this ("believe you see x" vs. "you genuinely and vividly see x"), which complicates the interpretation of the experiments.

The next post will be about how to piece these ideas together to build up to interesting effects, without even having to be deceptive.
