Predictions and People:
Imagine your life at its absolute best. You know, the most objectively satisfying it could possibly be. Or, imagine your life at its absolute worst, where the situation could not get any more terrible.
Alternatively, imagine one improvement to your life right now. Or, imagine one way to make it worse.
Easier?
Answering the latter two questions seems much simpler than the first two. For us humans, dealing with relative states seems far easier than dealing with absolute ones.
Looking at a choice to eat ice cream as something that is inherently “good” comes far more naturally than considering ways to maximize our happiness overall, or what ice cream could do to our overall lifespan (and by extension, our ability to enjoy things).
Our minds don’t naturally look at the bigger picture, extend the reasoning, or globalize the consequences.
But we’ve never really had a reason to.
Consider that in our world right now, there is always room to optimize. We’re concerned with incremental improvement, rather than trying to achieve some objective maximum. The entire art of human rationality, for example, is dedicated to merely making our decisions less wrong. Nowhere are we even close to becoming paragons of agency.
In other areas, our combustion engines hover around 20% thermal efficiency. Diesel gets up to 40%. A good amount of the US’s economic growth is due to merely shifting existing things around. India could increase its productivity 40-60% if it fully optimized its worker distribution.
The fact that our world has so much room for this type of local optimization creates an ingrained framing effect on our thoughts. Because we only have to deal with local consequences in our everyday lives, I believe “looking at the bigger picture” is actually a blind spot in our cognition.
I hypothesize that local optimization, thinking about how to make things relatively better off, is, for the most part, the only way we can coherently reason about optimization at all.
Pretend that you’re back in the “ancestral environment” several thousand years ago. You’re hungry. Having food is better than being hungry. You go to hunt something and eat it. When there aren’t too many hunters, this works pretty well. It’s not until the number of hunters shoots up that you have to deal with the population dynamics. So for a good part of human existence, it was unnecessary to grasp the overall gravity of our actions. Being relatively better off was good enough.
Locally optimizing may be a good heuristic for approximating good outcomes, but it’s definitely flawed. Aside from problems with globalized action, perverse incentive structures, like multipolar traps, can lead these heuristics astray.
In our world today, where overgrazing, overfishing, and overhunting are all very real phenomena, we have many examples where local optimization is no longer a good heuristic.
We can reason about the bigger picture on some level. Macroeconomics is an entire field formed from looking at aggregate results from individual actions. Similarly, game theory offers treatments of what happens when individuals compete. We’ve adopted systems that allow us to reason about this sort of big picture phenomenon, but this is still not an instinctive part of our cognitive toolkit.
When a five-year-old says they want “a bajillion dollars”, I suspect they are thinking about how great it would be to buy all the things they want. I very much doubt they are considering the global ramifications of such a spending spree, or what adding such an amount to the economy would do to devalue the dollar.
Even if we’re not always considering these large-scale impacts, it doesn’t make them go away. From a superrational standpoint, we have to deal with the consequences of our actions on a global scale. As a lone vehicle, your car doesn’t add much at the margin when it comes to carbon emissions. So what difference could it make if you, individually, decide to drive?
Of course, you and the other several hundred million people with cars are thinking the exact same thing.
The very fact that we have a term like “superrational” to even describe this sort of extended thinking means that others have approached this problem before. Perhaps most famously, Immanuel Kant’s “categorical imperative” tries to use logic to deduce morality in a similar way.
Kant has us run a thought experiment, globalizing the act in question, to see if a contradiction would arise. Other philosophical views, like consequentialism, look at the end results of an action to see if it aligns with our values. The common thread involves extended long-term reasoning.
The problem with such reasoning is that we cannot perform such thought experiments all the time. Thinking like this is costly, and predictions are hard.
Imagine if you had to do such globalized reasoning for every action you took. With every breath, you’d calculate what it means should everyone breathe at such a rate, and what that means for our general air quantity, and what upper bound that sets for future life. Or, imagine trying to understand the optimal path through your house to maximize its structural lifespan, conditional on other people walking the same way, and how such a lifespan interacts with your overall values.
Good luck with that.
In our deterministic universe, the butterfly effect means we’re never really off the hook. Our actions lead to more actions, which propagate endlessly outward. If my decision to wear a horse mask while walking on the street means a distracted driver hits a pedestrian, how much blame should I be assigned? Perhaps more importantly, how do I factor in such potential events into my judgments?
When we have a ridiculous number of moving pieces in our world, how confident can we be about the long-term effects of whatever we choose?
To such an end, choosing actions for their short-term effects has the benefit of being predictable. Sure, my choice to drink water might lead to some catastrophic incident down the road, but it’ll definitely quench my thirst in the meantime. We have far greater confidence in these direct consequences coming to pass.
Unfortunately for us, most of our “frames of caring”, the lengths of time into the future we care about, far exceed our ability to make accurate predictions.
Our world is often counter-intuitive. Optimal exercise includes periods of rest. Generals know you sometimes have to lose a battle to win the war. Evaluating a company requires more than just sizing up the CEO. Not directly optimizing may be a better way of reaching your goals.
Sometimes just doing the obvious thing isn’t what you actually want.
Philosopher Amanda Askell has addressed this in her poster “Is Effective Altruism Clueless?” at EA Global 2016, where she brings up the difficulties of predicting future events. If helping people in the present makes it more likely that some horrible event takes place in a few years, we haven’t successfully acted on our values; the long-term effects would dwarf any short-term benefits. Likewise, if committing an atrocity ends up saving everyone for eternity, we may want to actually do said atrocity.
Alas, we don’t have the tools to make such detailed future predictions.
I’d like to believe that worlds where we work towards good end-states with our short-term heuristics correlate more strongly with futures where nice things happen than worlds of indifference or intentional cruelty do. Yet, when I can’t even predict where I’ll be one year from now, how confident can I be that the universe follows such orderly patterns?
Not all of our predictions are limited to “short” time-scales, though. Consider that when we build a house, we’re making some (relatively) long-term assumptions about its sturdiness. This is in spite of the wide array of potential activities that could happen. The most obvious reason that houses tend to stay up for a while is because they’re made out of sturdy materials. Compared to marshmallows, for instance, wood and brick last much longer.
I think an interesting secondary reason is that society has a general structure that enforces house-longevity. Our streets are designed not to run straight through houses, there’s a taboo against burning houses down, houses have historically been built to last, and so on. There is common knowledge between all parties that houses are long-lasting things.
Most of the important moving parts in our world center around the actions of other people. If we have some sort of structure in place that ensures mutual understanding, we can have greater confidence in such predictions. So there appear to be certain classes of actions we can be more sure of.
Tying this back to the concept of a subtle framing effect, I believe it’s important to realize that we almost always deal with relative states. Short-term heuristics are helpful because we can identify futures that are relatively better off than where we are now, with good certainty. (Note that such futures would be “objectively” better, compared to our current state.)
This means that when we talk about concepts like “fairness” and “justice”, we’re probably inherently biased. When we can’t actually conceive of objective states, how coherent is it for us to talk about terms like “in a perfectly just world”?
I don’t actually know.
For all that I’ve knocked our reasoning skills, we humans still somehow do a fairly good job of things. Though short-term heuristics can be error-prone, I’m not sure we can do much better. As people, we go through life sequentially, in chronological order; so does everything else.
So we have to go through things one step at a time. Assuming we had some sort of Grand Master Plan, it too would be numbered sequentially. When I look at it that way, I feel a little better about having The Next Action as a general heuristic.
And writing this entire thing seemed like a fairly good Step One of lots and lots of helpful plans. So here we are.
That’s humans for you.
Predicting, in general, is very hard. Especially when it’s about the future.
I’ve recently finished Nate Silver’s The Signal and the Noise, which goes into depth about predictions. One of the chapters talks about how we forecast the weather. Indeed, most weather forecasting models are deterministic, so what does “30% chance of rain” even mean?
The thing is, even the slightest differences in starting conditions can have large effects: the butterfly effect. And human behavior is even worse. In general, people are not like the weather. We understand the physical mechanisms behind weather well enough that we could make deterministic predictions if we knew the position of every atom. In fact, we don’t even need every atom for the weather – approximations are good enough, for the most part.
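To make the “30% chance of rain” question concrete, here’s a toy sketch of how ensemble forecasting turns a deterministic model into a probability (my own illustration, not something from the book): run the same deterministic model from many slightly perturbed starting points and report the fraction of runs that “rain”. The logistic map stands in for a real weather model, and the perturbation size and rain threshold are arbitrary values made up for the example.

```python
import random

def logistic_step(x, r=3.9):
    """One step of the chaotic logistic map (a stand-in for a real weather model)."""
    return r * x * (1 - x)

def forecast(x0, steps=50):
    """Run one deterministic trajectory forward from initial condition x0."""
    x = x0
    for _ in range(steps):
        x = logistic_step(x)
    return x

# Ensemble: many runs of the same deterministic model from almost-identical
# starting points, since we never know the true initial conditions exactly.
base = 0.5
ensemble = [forecast(base + random.uniform(-1e-6, 1e-6)) for _ in range(1000)]

# "Chance of rain" = fraction of ensemble members that end up past some
# arbitrary, illustrative threshold.
rain_threshold = 0.7
p_rain = sum(x > rain_threshold for x in ensemble) / len(ensemble)
print(f"Chance of rain: {p_rain:.0%}")
```

The probability comes from our uncertainty about the initial conditions, not from any randomness in the model itself; that’s roughly the sense in which a deterministic forecast can still say “30% chance of rain”.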
But people are different. Even if we did know perfectly how the human brain worked, it would still be hard to predict human behavior. Simulating the human behavior of the whole planet requires simulating the actions of billions of people, and localized approximations won’t work.
Another thing about short and long time scales is near/far thinking. Robin Hanson has written about this at http://www.overcomingbias.com/2008/11/abstractdistant.html and http://www.overcomingbias.com/2009/01/disagreement-is-nearfar-bias.html. You might want to take a look at this if you’re interested.
Hi CJ,
You’re on point here:
Robin Hanson was indeed on my mind when I wrote this (although I will be checking out the two links on OB, which are new to me). At EA Global, he was on a panel on forecasting, which set my gears running. Some other philosophical critiques of consequentialism based off the inaccuracy of forecasting from Amanda Askell rounded out my mental space.
Phil Tetlock is another name which pops up a lot when predictions are the topic of discussion. Have you read Superforecasting?
Hi Owen, I haven’t read Superforecasting yet, though it’s on my reading list. I’ll take a look at it after I finish what I’m reading at the moment, thanks!