How you make decisions in an uncertain world

Posted on September 14, 2011


Confidence in Initial Beliefs

Scientists like mathematical models because such models let them explore complex questions from a relatively simple perspective – a mathematical perspective. In what follows we rely on a very simple math model – one that even Homer Simpson would understand – to explore how we make decisions in a complex and uncertain world.

Popular ideas about knowledge evolved from the simple supposition that knowledge is a direct reflection of reality, to the supposition that knowledge is the product of rational, semi-rational, and irrational editing mechanisms (Hawking’s models) processing small samples of incomplete and transformed or distorted data (observational checkpoints surrounded by various regions of uncertainty). Although this provides a general idea of how knowledge is constructed in social science, it’s still vague, so we need to make the ideas clearer by being more explicit about the syntax and about observational checkpoints.

The syntax is relatively simple, so even if you’ve forgotten your high school algebra, you will have no trouble following the ideas. It reflects the commonsense notion that the higher your confidence in your initial belief or estimate, the less impact new evidence will have on your final belief or estimate. The syntax states:

Ef = C·Ei + (1 – C)·g(E1, E2, . . . , Ej, . . . , En)


In other words, a revised or final estimate (Ef) of, say, how intelligent you are, depends on your confidence (C) in the initial estimate of your intelligence (Ei), adjusted by distributing the remaining confidence (1 – C) over subsequent evidence or estimates, such as a failed exam (E1), criticism by friends for doing something dumb (E2), forgetting an important appointment (Ej), or getting an A on a statistics exam (En), with the flow of that evidence being edited by some rational, semi-rational, or nonrational editing mechanism (g).
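The update rule can be sketched in a few lines of code. The function names, the 0–100 intelligence scale, and the particular evidence scores are our own illustrative choices, and a simple average is just one possible editing mechanism g:

```python
def revise_estimate(initial, confidence, evidence, editor):
    """Anchoring-and-adjustment update: Ef = C*Ei + (1 - C)*g(E1, ..., En).

    The final estimate weights the initial estimate Ei by confidence C,
    and the edited evidence by the remaining confidence (1 - C)."""
    return confidence * initial + (1 - confidence) * editor(evidence)

def simple_average(evidence):
    # One possible editing mechanism g: treat every piece of evidence equally.
    return sum(evidence) / len(evidence)

# Hypothetical intelligence estimates on a 0-100 scale: a failed exam,
# criticism from friends, a forgotten appointment, an A in statistics.
evidence = [30, 40, 35, 95]

print(revise_estimate(70, 0.9, evidence, simple_average))  # high C: stays near 70
print(revise_estimate(70, 0.1, evidence, simple_average))  # low C: pulled toward the evidence
```

Note what the arithmetic does at the extremes: with C = 1 the evidence term is multiplied by zero and the initial estimate survives untouched; with C = 0 the initial estimate vanishes and the edited evidence decides everything.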

Decision Making

Now here is the most important part of the model. In this model of decision making, or belief revision, C can take values ranging from 0 to 1. So if C is very high (for example, 1), then (1 – C) is very low; in fact, it’s zero, and when you multiply the subsequent evidence by zero, you get zero. The model thus reflects the power of initial beliefs (or core assumptions, or biases) to act as feedforward mechanisms canceling, or reducing, the impact of subsequent information—serving to defend the island of ‘truthiness’. When confidence is very high you’re dealing with a bigot. In contrast, when confidence in the initial belief is very low, (1 – C) is high, and so the subsequent evidence (E1, . . . , En) and the particular editing mechanism employed (g) have a major impact on your revised or final belief. Now you can see why such models are called anchoring-and-adjustment, or adaptive-expectation, models—they let you describe and predict the conditions under which subsequent information will influence initial beliefs. Furthermore, the model explicitly assumes that any given decision results from weighting the adjustment mechanism [(1 – C)g(E1, . . . , En)] against the weight given to the anchor, or initial estimate [C·Ei].

Take a minute or two now and play with the model. Notice what happens if your initial confidence is low and your editing mechanism (g) gives special weight to the latest news. In this case En (the A grade in statistics) becomes the major influence, and your final estimate (Ef) is that you have high intelligence. But suppose the same conditions prevail, except that the operating editing mechanism (g) is not a recency bias, as above, but a primacy bias, in which early news gets the greatest weight. Then you would give the greatest weight to E1 (the failed exam), and so your final estimate would be that you have low intelligence. If confidence in the initial estimate is moderate (e.g., 0.4 to 0.5), your initial estimate serves as a moderate anchor; subsequent estimates will still have some effect, but not as much as when confidence is low (e.g., 0 to 0.1).
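The recency-versus-primacy contrast can also be played with in code. This is a self-contained sketch under our own assumptions: the linearly increasing or decreasing weights are just one crude way to express each bias, and the evidence scores are the invented ones from the story (failed exam first, A grade last):

```python
def revise(initial, c, evidence, g):
    # Ef = C*Ei + (1 - C)*g(E1, ..., En)
    return c * initial + (1 - c) * g(evidence)

def recency_g(evidence):
    # Recency bias: later items get linearly increasing weight,
    # so the latest news dominates the edited estimate.
    w = range(1, len(evidence) + 1)
    return sum(wi * e for wi, e in zip(w, evidence)) / sum(w)

def primacy_g(evidence):
    # Primacy bias: earlier items get the larger weights,
    # so early news dominates the edited estimate.
    w = range(len(evidence), 0, -1)
    return sum(wi * e for wi, e in zip(w, evidence)) / sum(w)

# Failed exam first (E1 = 30), A in statistics last (En = 95), low confidence.
evidence = [30, 40, 35, 95]
print(revise(70, 0.1, evidence, recency_g))  # pulled up toward the A grade
print(revise(70, 0.1, evidence, primacy_g))  # pulled down toward the failed exam
```

Same initial estimate, same confidence, same evidence—only the editing mechanism differs, and the two final estimates land on opposite sides of the evidence mean, which is the whole point of the exercise.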

We hope this example gives you a feel for how a simple ‘model’ helps us explore a complex problem. Notice too that we try to make our assumptions clear (remember Simon’s mantra: no conclusions without assumptions), so the reader – now familiar with the author’s beliefs or biases – can decide whether or not to buy the author’s model.


Posted in: Sciencing