Superforecasting

July 03, 2020

I recently read Superforecasting by Philip E. Tetlock and Dan Gardner.

Would I recommend reading this to an alternate universe self that had not read this book yet? Yes! Many of the processes that superforecasters follow apply to general problem solving and “intelligence.”

Here are two brief highlights of things I learned:

Scientific Thinking

“Beliefs are hypotheses to be tested, not treasures to be guarded” (127)

A pithy framing of scientific thinking. Superforecasters are not superhumans. In fact, they're particularly aware of how fallible they are. To combat overconfidence in beliefs, superforecasters doubt them. A healthy amount of doubt, of the need to prove you're correct rather than assuming so, is a universal lesson.

Quick:

A baseball bat and a ball cost $1.10 together. The bat costs $1.00 more than the ball. How much does the ball cost?

If you’re like me, your instinct instantly said $0.10. Work it through, though: the ball costs $0.05 and the bat $1.05. Intuition is not always right.

In software, we can draw a parallel between a belief and what a program is supposed to do. I believe this function produces the correct Fibonacci sequence. How much confidence can you have without at least some basic testing, even if it’s just running the function? Rigor of testing should correspond with the value of correctness. A core library function with thousands of dependents should guarantee it does what it says it does. Displaying blog posts in the correct order on this site? No one even reads it!
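
As a minimal sketch of that lowest tier of confidence-building, here's the belief-as-hypothesis framing in Python. The function and the hand-checked values are my own illustration, not code from the book:

    # The belief: fib produces the correct Fibonacci sequence.
    def fib(n):
        """Return the n-th Fibonacci number (0-indexed: 0, 1, 1, 2, 3, ...)."""
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    # The cheapest possible test of that belief: run it against
    # values small enough to verify by hand.
    assert [fib(n) for n in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]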

“All models are wrong, but some are useful” (80)

I'm personally attached to this quote because a college professor loved saying it while cackling. It's profoundly true, though. Borges’ On Exactitude in Science captures the idea: a map drawn at the scale of the territory itself is perfectly accurate and totally useless. Useful models abstract away the details irrelevant to the question at hand.

Superforecasting Plan of Attack

  1. Unpack the question into components. Seek to decompose complex problems. While the initial question might be difficult to answer outright, decomposing it into constituent parts and then solving those smaller parts leads to the solution. I think this is generally true of any problem too complex to fit in working memory. An example in software is the UNIX philosophy of “do one thing well.” (A toy decomposition is sketched after this list.)
  2. Distinguish sharply between known and unknown. Unknowns are risks that should be researched further and minimized. Ideally, unknowns can be abstracted away.
  3. Leave no assumptions unscrutinized. Assumptions are the things we treat as fixed or irrelevant to the problem at hand, and many universal assumptions are totally invisible.
  4. Adopt the outside view and downplay the specialness of the occasion. For example, how tall can we expect the son of an NBA player to be? First, we’d need to look at the distribution of heights for adult males.
  5. Adjust based on the inside view. This is Bayesian reasoning: how do we update the outside-view prior for the specifics of the problem at hand? Here, adjust for the heights of NBA players and how height correlates from father to son. (A worked sketch of steps 4 and 5 follows the list.)
  6. Explore similarities/differences between your view and others’. This gets at “the wisdom of the crowd,” also explored in the book. The idea is that, with caveats, a crowd will tend toward the correct result: independent, correct thinking points in the right direction, while biases and errors tend to cancel out. (A small simulation of this appears after the list.)
  7. Synthesize. Compose. Put all the components of the question together into one final answer.
  8. Express the prediction precisely, preferably as a percentage. Hindsight bias tends to distort later reflection; committing thoughts to paper is one way to counter this. (The book scores such forecasts with Brier scores, sketched after the list.)
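
To make step 1 concrete, here's a toy decomposition in Python. The question and every probability are invented purely for illustration:

    # Step 1, sketched: break a compound question into smaller parts.
    # Question: "Will the project ship a public beta this quarter?"
    # All of these numbers are made up for the example.
    p_design_done = 0.90           # the design is finalized in time
    p_built_given_design = 0.70    # it gets built, given the design is done
    p_approved_given_built = 0.80  # release is approved, given it's built

    # Each piece is easier to reason about than the original question;
    # multiplying the chain recomposes the overall answer.
    p_ship = p_design_done * p_built_given_design * p_approved_given_built
    print(f"P(ship) = {p_ship:.2f}")  # P(ship) = 0.50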
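
Steps 4 and 5 can be worked through with the NBA example. The population figures and the father-son correlation below are rough assumptions, not real statistics:

    # Steps 4-5, sketched: start from the outside view (the base rate),
    # then adjust toward the inside view. All numbers are assumptions.
    pop_mean_cm = 175.0  # outside view: average adult male height
    father_cm = 200.0    # inside view: this father plays in the NBA
    r = 0.5              # assumed father-son height correlation

    # Regression toward the mean: move from the base rate toward the
    # father's deviation, in proportion to the correlation.
    expected_son_cm = pop_mean_cm + r * (father_cm - pop_mean_cm)
    print(expected_son_cm)  # 187.5 -- taller than average, shorter than dad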
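
Step 6's error-cancellation claim is easy to simulate, assuming the caveats hold (estimates that are independent and unbiased):

    import random

    # Step 6, sketched: 1,000 independent, unbiased, noisy estimates.
    # Any single estimate is off by about 10 on average; their mean is not.
    random.seed(0)
    truth = 42.0
    estimates = [truth + random.gauss(0, 10) for _ in range(1000)]
    crowd_estimate = sum(estimates) / len(estimates)
    print(abs(crowd_estimate - truth))  # typically well under 1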
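
Finally, step 8: precise percentages are what make a forecast scorable at all. The book grades forecasters with the Brier score; this is the two-outcome form it describes, where 0 is perfect and 2 is maximally wrong:

    # Step 8, sketched: the Brier score rewards precise, well-calibrated
    # probabilities and punishes confident misses.
    def brier(forecast, outcome):
        """forecast: stated probability of the event;
        outcome: 1 if it happened, 0 if it didn't."""
        return (forecast - outcome) ** 2 + ((1 - forecast) - (1 - outcome)) ** 2

    print(brier(0.9, 1))  # 0.02 -- confident and right
    print(brier(0.9, 0))  # 1.62 -- confident and wrong
    print(brier(0.5, 1))  # 0.50 -- hedging every question scores mediocre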
