July 03, 2020
I recently read Superforecasting by Philip E. Tetlock and Dan Gardner.
Would I recommend this book to an alternate-universe self who hadn't read it yet? Yes! Many of the processes that superforecasters follow apply to general problem solving and “intelligence.”
Here are two brief highlights of things I learned:
“Beliefs are hypotheses to be tested, not treasures to be guarded” (127)
A pithy framing of scientific thinking. Superforecasters are not superhumans. In fact, they’re particularly aware of how fallible they are. To combat overconfidence, superforecasters doubt their beliefs. A healthy amount of doubt, of the need to prove you’re correct rather than assuming so, is a universal lesson.
Quick:
A baseball bat and a ball cost $1.10 together, and the bat costs $1.00 more than the ball. How much does the ball cost?
If you’re like me, your instinct instantly said $0.10. That’s wrong: the ball costs $0.05 (so the bat costs $1.05, exactly $1.00 more). Intuition is not always right.
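A quick sanity check makes the trap obvious. Here’s a minimal sketch in Python (my own illustration, not from the book) that just solves the two constraints directly:

```python
# Constraints: ball + bat == 1.10 and bat == ball + 1.00
# Substituting: ball + (ball + 1.00) == 1.10  =>  2 * ball == 0.10
ball = (1.10 - 1.00) / 2
bat = ball + 1.00

print(f"ball = ${ball:.2f}, bat = ${bat:.2f}")  # ball = $0.05, bat = $1.05
assert abs((ball + bat) - 1.10) < 1e-9  # the pair really does sum to $1.10
```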
In software, we can draw a parallel between a belief and what a program is supposed to do. I believe this function produces the correct Fibonacci sequence. How much confidence can you have without at least some basic testing, even if it’s just running the function once (see the sketch below)? The rigor of testing should correspond to the value of correctness. A core library function with thousands of dependents should guarantee it does what it says it does. Displaying blog posts in the correct order on this site? No one even reads it!
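As a minimal sketch (the function and test are mine, not from the book): a single assert is the cheapest way to turn “I believe this is correct” into a hypothesis that’s actually been tested.

```python
def fib(n):
    """Return the first n numbers of the Fibonacci sequence."""
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

# The belief "fib produces the correct sequence" as a hypothesis to test,
# not a treasure to guard:
assert fib(8) == [0, 1, 1, 2, 3, 5, 8, 13]
```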
“All models are wrong, but some are useful” (80)
I’m personally attached to this quote because a college professor loved saying it while cackling. It’s profoundly true, though. Borges’ On Exactitude in Science captures the idea: a map drawn at the exact scale of the territory it describes is perfectly accurate and totally useless. Useful models abstract away details irrelevant to the question at hand.