Explaining history (math, physics)? Is it even possible? [@ myth #7]

An explanation of an event is a post hoc interpretation, not something that is objective in an epistemological sense. Of course, independent observers (teachers, for example) may concur on a particular explanation; there is, however, no guarantee that such agreement means the explanation is true. Giving explanations of events is a psychological act, albeit one based on possibly expert domain knowledge. That psychological part places the issue in the category of myth #7, the misconceived idea of formulating educational goals in pseudo-psychological words, as things in the heads of pupils. Lots of stuff to blog about.

What triggered this blog post is an unpublished talk by Amos Tversky to a gathering of professional historians, on the subject of how they turned historical uncertainties into convincing and almost deterministic historical narratives. Historians' magic: turning what a priori could not be predicted into beautiful post hoc explanations. The talk by Tversky is mentioned in the double biography of Tversky and Kahneman by Michael Lewis (2017, pp. 205-208).
My hunch: isn’t this also what we ask examinees to do, in some or many cases? Anyway. The historian case is only one example of a problem found in many (applied) sciences: scientists thinking up hypotheses, physicians searching for diagnoses, judges reaching verdicts.

  • Lewis, 2017, p. 208: Historians imposed false order upon random events, too, probably without even realizing what they were doing. Amos had a phrase for this. ‘Creeping determinism,’ he called it—and jotted in his notes one of its many costs: “He who sees the past as surprise-free is bound to have a future full of surprises.”
    (...) The historians in his audience of course prided themselves on their ‘ability’ to construct, out of fragments of some past reality, explanatory narratives of events which made them seem, in retrospect, almost predictable. The only question that remained, once the historian had explained how and why some event had occurred, was why the people in his narrative had not seen what the historian could now see. “All the historians attended Amos’s talk,” recalled Biederman [a Stanford psychologist, bw], “and they left ashen-faced.”

What about mathematics and physics? I quote from Lewis, 2017, p. 208:

  • After he had heard Amos explain how the mind arranged historical facts in ways that made past events feel a lot less uncertain, and a lot less unpredictable, than they actually were, Biederman felt certain that his and Danny’s [Kahneman] work could infect any discipline in which experts were required to judge the odds of an uncertain situation — which is to say, great swaths of human activity.

Uncertain events are troublesome to us, simple human beings. It turns out that our judgments of probability are, more often than not, wide of the mark, and they may differ enormously depending on whether the probability is worded positively or negatively (the glass half full or half empty; a 90% chance of survival or a 10% risk of not surviving). One of the problems here is our inability to reckon with base rates, or even to be aware of them. Many human heuristics are not helpful at all in the complex situations of modern culture. A powerful essay on these heuristics and biases is Tversky & Kahneman (1974), in Science.
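To make the base-rate point concrete, here is a minimal sketch of the classic screening-test problem. The numbers are illustrative assumptions of mine, not taken from this post or from Tversky & Kahneman: a test that is quite accurate still yields mostly false alarms when the condition it screens for is rare.

    # Illustrative numbers (assumed for this sketch): a condition with 1% prevalence,
    # a test that detects it 95% of the time, and a 5% false-positive rate.
    prevalence = 0.01        # base rate: P(condition)
    sensitivity = 0.95       # P(positive | condition)
    false_positive = 0.05    # P(positive | no condition)

    # Bayes' rule: P(condition | positive test)
    p_positive = sensitivity * prevalence + false_positive * (1 - prevalence)
    p_condition_given_positive = sensitivity * prevalence / p_positive

    print(round(p_condition_given_positive, 2))  # about 0.16, not the intuitive 0.95

The intuitive answer tends to track the test's accuracy (95%) and to ignore the 1% base rate; Bayes' rule puts the probability at roughly 16%. That gap is exactly the kind of error the heuristics-and-biases programme documents.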

How might knowledge of these heuristics and biases apply to achievement test item design? I can’t remember ever having seen a serious treatise or research article on the subject. Should I google some more? Try your luck on a recent blog by Andy May (is that correct?) on the knowledge and skills debate in history education.

Back on topic. Many essay examinations boil down to explanations or other ways of constructing narratives. How do you judge those works? There is a serious risk that what you will be judging is not (only) curriculum-aligned knowledge, but (also) the quality of the explanation or narrative: how convincing it is. Then you walk straight into the Tversky trap of ‘creeping determinism’. What alternatives do you have? You can’t look back into the brains of the writers! You might regard the pupil’s brain as a black box and simply rate convincingness; the problem with that approach is that you run into the problem of myth #4: ‘It’s okay for intelligent pupils to have an edge.’ No, that’s not okay; if the goal is to test for differences in intelligence, then use a valid intelligence test.
The work of Tversky and Kahneman shows that there is a methodology for researching heuristics and biases. That methodology might be used to develop techniques that enable valid assessments of explanations and narratives, as well as instructional methods, of course. Is this a viable alternative? I do not know, at least not yet.

Educational objectives formulated in the pseudo-psychological terms of being able to explain or to conceive narratives might be highly problematic in ways not even suspected by professionals in the specific domain.

[The text will be amended or complemented on points of detail.]


Daniel Kahneman (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux. a review

Michael Lewis (2017). The Undoing Project: A Friendship That Changed Our Minds. Norton. Friends: Amos Tversky, Daniel Kahneman. info

Amos Tversky (1972). Historical interpretation: Judgment under uncertainty. Unpublished talk, cited (with quotes) in Lewis, 2017, pp. 205-208. books.google text fragment

Amos Tversky & Daniel Kahneman (1974). Judgment under uncertainty: Heuristics and biases. Science, 185 (4157), 1124-1131. pdf [This article, of course, is reprinted in many edited volumes on decision-making.]
