[In this post, I’ll try to tie together all my current thoughts on rationality into a coherent synthesis. This summarizes the ideas of debiasing, habits, akrasia, and ideas found in Realistic Expectations and Actually Practicing.]
I’m interested in how rationality can help us make better decisions.
Many of these decisions seem to involve split-second choices where it’s hard to sit down and search a handbook for the relevant bits of information—you want to quickly react in the correct way, else the moment passes and you’ve lost. On a very general level, it seems to be about reacting in the right way once the situation provides a cue.
Consider these situation-reaction pairs:
- You are having an argument with someone. As you begin to notice the signs of yourself getting heated, you remember to calm down and talk civilly. Maybe also some deep breaths.
- You are giving yourself a deadline or making a schedule for a task, and you write down the time you expect to finish. Quickly, though, you remember to actually check if it took you that long last time, and you adjust accordingly.
- You feel yourself slipping towards doing something some part of you doesn’t want to do. Say you are reneging on a previous commitment. As you give in to temptation, you remember to pause and really let the two sides of yourself communicate.
- You think about doing something, but you feel an aversive, flinch-y reaction to it. As you shy away from the mental pain, rather than just quickly thinking about something else, you also feel curious as to why you feel that way. You query your brain and try to pick apart the “ugh” feeling.
Two things seem key to the above scenarios:
One, each situation above involves taking an action that is different from our keyed-in defaults.
Two, the situation-reaction pair paradigm is pretty much CFAR’s Trigger Action Plan model, paired with a multi-step plan.
Also, knowing about biases isn’t enough to make good decisions. Even memorizing a mantra like “Notice signs of aversion and query them!” probably isn’t going to be clear enough to be translated into something actionable. It sounds nice enough on the conceptual level, but when, in the moment, you remember such a mantra, you still need to figure out how to “notice signs of aversion and query them”.
What we want is a series of explicit steps that turn the abstract mantra into small, actionable steps. Then, we want to quickly deploy the steps at the first sign of the situation we’re looking out for, like a new cached response.
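As a loose illustration of this trigger-plus-explicit-steps structure (a toy sketch of my own, not anything CFAR prescribes; all names here are hypothetical), a TAP can be modeled as a cue paired with a short list of small, concrete actions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TriggerActionPlan:
    """A toy model of a TAP: a concrete situational cue plus explicit sub-steps."""
    trigger: str                                     # the situation cue to watch for
    steps: List[str] = field(default_factory=list)   # small, actionable steps

    def rehearse(self) -> str:
        """Render the plan as a 'When X: do A; then B; ...' rehearsal sentence."""
        return f"When {self.trigger}: " + "; then ".join(self.steps)

# Example: the time-estimation TAP from the list above
calibration_tap = TriggerActionPlan(
    trigger="I write down a time estimate for a task",
    steps=[
        "pause before committing to the number",
        "recall how long a similar task actually took last time",
        "adjust the estimate toward that observed duration",
    ],
)
print(calibration_tap.rehearse())
```

The point of writing it this way is that the `steps` list forces the abstract mantra down to the 5-second level: each entry has to be something you could actually do in the moment.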
This looks like a problem that a combination of focused habit-building and a breakdown of the 5-second level can help solve. (I’ll be doing a write-up on habits soon that examines this in more detail.)
In short, though, the goal is to combine triggers with clear algorithms to quickly optimize in the moment. Reference class information from habit studies can also help give good estimates on how long the whole process will take.
As a result of implementing rationality in this way, I’d plausibly expect to see improvements in (corresponding to the earlier examples):
- Good Argument Norms: a cooperative attitude in arguments
- Calibration: improved time estimates for task completion; incorporating base rates when making decisions
- Personal Efficacy: the ability to achieve goals (“doing what you want to do”)
- Internal Stability: the ability to debug internal conflicts / aversions
But these Trigger Action Plans don’t seem to directly cover the willpower-related problems of akrasia.
Recall that akrasia is this weird thing that happens when you know you should be doing something, but you don’t really want to. Just forcing yourself to power through doesn’t solve the root problem. There’s an internal dialogue that needs to happen, teasing apart the aversions and “ugh” fields that you might be ignoring.
Sure, TAPs can help alert you to the presence of an internal problem, like in the above example where you notice aversion. And the actual internal conversation can probably be operationalized to some extent, like how CFAR has described the process of Double Crux.
But most of the Overriding Default Habit actions seem to be ones I’d be happy to do anytime—I just need a reminder—whereas akrasia-related problems are centrally related to me trying to debug my motivational system. For that reason, I think it helps to separate the two. Also, it makes the outside-seeming TAP algorithms complementary to, rather than at odds with, the inside-seeming internal debugging techniques.
Loosely speaking, then, I think it still makes quite a bit of sense to divide the things rationality helps with into two categories:
- Overriding Default Habits:
[These are the situation-reaction pairs I’ve covered above. Here, you’re substituting a modified action for your “default action”. But the cue serves mainly as a reminder/trigger. It’s less about diagnosing internal disagreement.]
- Akrasia / Willpower Problems:
[Here we’re talking about problems that might require you to precommit (although precommitment might not be all you need to do), perhaps because of decision instability. The “action-intention gap” caused by akrasia, where you (sort of) want to do something but also don’t want to, goes in here as well.]
Still, it’s easy to point to lots of other things that fall in the bounds of rationality that my approach doesn’t cover: epistemology, meta-levels, VNM rationality, and many other concepts are conspicuously absent. Part of this is because I’ve been focusing on instrumental rationality, while a lot of those ideas are more in the epistemic camp.
Ideas like meta-levels do seem to have some place in informing other ideas and skills. Even as declarative knowledge, they do chain together in a way that results in useful real world heuristics. For example, meta-levels can help you ask what the ultimate direction of a conversation is. Then, you can table conversations that don’t seem immediately useful/relevant and not get sucked into the object-level discussion.
At some point, useful information about how the world works should actually help you make better decisions in the real world. For an especially pragmatic approach, it may be useful to ask yourself, each time you learn something new, “What do I see myself doing as a result of learning this information?”
There’s definitely more to mine from the related fields of learning theory, habits, and debiasing, but I think I’ll have more than enough skills to practice if I just focus on the immediately practical ones.
So this year I’ll be especially focusing on the behavior / habit side of rationality. To that end, there are a few fields that all seem related. I’ll be surveying literature across them this year, and I plan to write some primers throwing together these ideas, along with my own experiences. These write-ups will likely include:
- Lots and lots of examples
- Surveys of papers on habit research
- Attempts to create / maintain helpful TAPs for rationality techniques