Decision Theories in Real Life

[This post goes over some basics in decision theory. Then it looks at how some alternative decision theories might play out in practice, like regret minimization and evidential decision theory. In each case, I’m more concerned with what it looks like to implement an approximation of the procedure in real life than with the technical details.]

So there’s this field of study called decision theory that examines how people make decisions.

Which, if you look at it naively, seems to be a fairly simple question. When I make a decision, I consider all the options in turn, and then I select the best one. It doesn’t seem too difficult.

Actually specifying the process, though, turns out to be a little tricky. Two points of contention are what we really mean by “consider all the options” and “best”.

One way we might be able to formalize a decision making process looks something like this:

“First, look at all the options. One at a time, imagine that you took each option. Evaluate what you think the world looks like as a result of you taking that option. Choose the option that leads to the world you like best.”

For example, if I have a choice between ice cream, waffles, and cotton candy as a dessert, I can imagine myself eating each one. Then, I see which item I anticipate I will enjoy eating the most. I may also consider other relevant factors like how each food item will affect my waistline. The point is that I’m evaluating each option by imagining what the effects of it are.

After that, I simply choose the action that leads to the effects I care about most.

It turns out that this procedure is basically described by a decision theory called causal decision theory (CDT). I think that CDT actually does a fairly good job of describing how the majority of human decision making is done. As the name suggests, causal decision theory has us look at the things that are caused by each one of our choices and select the choice that leads to the result we like the most.
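As a toy illustration, the CDT-ish procedure can be sketched in a few lines of code, using the dessert example above. The utility numbers are entirely made up for illustration; the point is just the shape of the procedure (imagine each outcome, score it, pick the best).

```python
# A minimal sketch of a CDT-style choice over the dessert example above.
# The utilities are invented illustrative numbers, not anything from the post.

def cdt_choose(options, utility_of_outcome):
    """Pick the option whose (imagined) causal outcome we value most."""
    return max(options, key=utility_of_outcome)

# Hypothetical valuations: enjoyment minus waistline cost.
utilities = {"ice cream": 7 - 3, "waffles": 8 - 2, "cotton candy": 5 - 1}

best = cdt_choose(utilities.keys(), lambda o: utilities[o])
print(best)  # waffles (8 - 2 = 6 beats the other options' 4)
```

Everything that follows in this post can be read as swapping out either the set of options considered or the scoring function in this loop.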

Perhaps unsurprisingly, though, CDT isn’t the whole story. There’s a fairly expansive body of literature that explores how CDT may be suboptimal on certain decision theoretic problems.

I’m not going to go into the details of these theoretical problems. Partly because I’m trying to stay pragmatic, and partly because I don’t fully understand the problems myself.

What I do think might be interesting is to explore some alternate decision theories that output different answers than CDT when run on real-world problems.

Here are some alternative algorithms that might be worthy of consideration:

(To be clear, I’m not looking at the theoretical advantages that each decision theory has over the others. I’m looking at the phenomenological differences you’d experience if you were to actually use them to determine your life choices. That is, what it feels like to try and intuitively run a different way of making decisions.)


Regret Minimization: (There’s also a more technical algorithm of the same name, which is what the link goes to. I’m using it here to mean the intuitive meaning of minimizing regret.)

Regret minimization, as the name suggests, tries to minimize the regret you might feel. Explicitly, we might say that it tells us to take into account how much regret we’d feel upon not taking each action, and then choose the action we’d most regret passing up.

So instead of looking forward, we’re imagining ourselves looking back at the options.

How does this differ from CDT in the real world?

For an example, say you’re deciding whether to go on a trip with your parents or hang out with your friends. Using regret minimization, you might reason that you’ll regret not having spent more time with your parents when all’s said and done. Thus, even if it’s less “fun” than hanging out with friends, regret minimization would advise you to go with your parents.

Or, say you’re wondering whether you should go to a special conference. It’s a costly conference, but there are some very famous people going. Using regret minimization, you see that you’d likely regret it if you didn’t go. Perhaps something fantastic will happen, plus it’s a once-in-a-lifetime event.

Plainly put, regret minimization slightly changes the naive valuation we have on events and actions. It places a higher value on unique events and higher-variance situations, which seem to capture a big part of why we feel regret [citation needed].

In short, you might become less risk-averse and value unique situations more.
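One common formalization of this intuition is minimax regret: for each action, ask how much better you could have done in each way the world might turn out, and pick the action whose worst-case regret is smallest. Here’s a rough sketch using the trip-with-parents example from above; the two “states of the world” and all payoff numbers are invented for illustration.

```python
# A toy minimax-regret sketch (one formalization of "minimize regret").
# States, actions, and payoffs are invented for illustration.

def minimax_regret(payoffs):
    """payoffs[action][state] -> value. Pick the action whose worst-case
    regret (best achievable in that state minus what you got) is smallest."""
    states = next(iter(payoffs.values())).keys()
    best_in_state = {s: max(p[s] for p in payoffs.values()) for s in states}

    def worst_regret(action):
        return max(best_in_state[s] - payoffs[action][s] for s in states)

    return min(payoffs, key=worst_regret)

# Trip with parents vs. hanging out, across two ways life could go.
payoffs = {
    "trip with parents":     {"ordinary year": 6, "last chance": 9},
    "hang out with friends": {"ordinary year": 8, "last chance": 2},
}
print(minimax_regret(payoffs))  # trip with parents
```

Notice how the unique, hard-to-repeat state (“last chance”) dominates the calculation, which matches the intuition that regret weights unique events heavily.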


Evidential Decision Theory (EDT): (I may be butchering the explanation of EDT here. Please read the actual source material.)

Rather than looking at causal links, evidential decision theory asks that we choose the world that is best, conditioned on our choice.

Put loosely, I think EDT roughly prescribes the procedure “Choose actions such that you become part of the group of people who experience the stuff you care about.”

You’re also sorta trying to enforce an association between yourself and different worlds.

For example, say you are wondering whether or not you should ask your crush out.

EDT might look at the probability that people who ask their crushes out actually get accepted. Let’s say that the probability is indeed quite high.

Then, in such a case, EDT recommends that you ask your crush out.

The reasoning behind this is something like the following: if someone told you a story about a person who asked their crush out (with no other information), you’d likely assign a high probability to their having been accepted (based on past data). In the same way, EDT says you should ask your crush out because this makes you part of the same group of people for whom the probability of acceptance is high.

The technical reason is something like this: your choice to ask your crush out is evidence (hence the name) that you’re the sort of person who gets accepted. (Because people who ask their crushes out do indeed often get accepted.) Once you take an action, you can condition on that evidence to figure out what the results will be. (This is different from a causal link.)
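A crude way to sketch this conditioning is to score each action by the average outcome among (hypothetical) people who took that action, rather than by tracing the causal effects of taking it. The “historical records” and utility numbers below are invented purely for illustration.

```python
# An EDT-flavored sketch: evaluate each action by the empirical
# distribution of outcomes among people who took that action.
# The historical records here are invented, purely illustrative.

def edt_choose(actions, records, utility):
    """records[action] -> outcomes observed for people who took that action.
    Score each action by average utility conditional on having taken it."""
    def expected_utility(action):
        outcomes = records[action]
        return sum(utility(o) for o in outcomes) / len(outcomes)
    return max(actions, key=expected_utility)

records = {
    "ask crush out": ["accepted", "accepted", "accepted", "rejected"],
    "stay quiet":    ["nothing", "nothing", "nothing", "nothing"],
}
utility = {"accepted": 10, "rejected": -2, "nothing": 0}.get

print(edt_choose(records, records, utility))  # ask crush out
```

The key structural difference from the CDT sketch is that nothing here models what asking *causes*; the action is scored purely by the company it puts you in.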

How does this differ from CDT in the real world?

(See the discussion on Newcomb’s Problem for a lengthy discussion on this.)

For an example, say you’re trying to figure out whether or not your planned project will succeed.

With a CDT-esque view, the question you ask yourself is “Which actions can I take now such that their effects move me closer to project completion?” If you focus on figuring out how to take actions that lead to your project being completed, that’s often in practice a framing that calls upon an inside view that can be susceptible to unforeseen circumstances.

If instead you take an EDT-esque view, the question you ask yourself becomes, “How can I take actions such that I join the group of people whose projects don’t fail?” In which case, you end up re-deriving something akin to reference class forecasting, as you’re trying to do the things that successful projects do.

EDT can be useful for priming an outside-view mode of thinking, as you’d be on the lookout for ways to condition on states of the world. Compared to CDT, which tries to forward-chain, I’d expect an EDT style of thinking to be better at planning.

In short, you might become less overconfident.


Functional Decision Theory (FDT): (I am really out of my depth here. So, um, yeah. Read the actual paper. Everything below is probably inaccurate to some extent.)

FDT is a decision theory that is sort of like Kant’s categorical imperative, but not exactly.

Best as I can tell, FDT prescribes that you follow the procedure of “Choose actions such that you are determining the output of your decision function for all potential instantiations of such a procedure”.

This tends to work out well in theoretical situations like the Twin Prisoner’s Dilemma, where you face a version of the famous game against a copy of yourself. That’s obviously the most clear-cut example of two agents running the same decision algorithm, but you can also think about it in terms of your interactions with other people.
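The Twin Prisoner’s Dilemma makes the structural difference easy to sketch: a CDT-ish rule holds the twin’s move fixed and best-responds, while an FDT-ish rule notices that choosing a move also fixes the twin’s move (since the twin runs the same algorithm). A minimal sketch, with standard Prisoner’s Dilemma payoffs:

```python
# A toy Twin Prisoner's Dilemma, comparing a CDT-ish and an FDT-ish rule.
# Payoffs follow the standard PD ordering; the point is only the structural
# difference: FDT treats your choice as also fixing your twin's choice.

PAYOFF = {  # (my move, twin's move) -> my payoff
    ("cooperate", "cooperate"): 3, ("cooperate", "defect"): 0,
    ("defect", "cooperate"): 5,    ("defect", "defect"): 1,
}

def cdt_move(twin_move):
    """Hold the twin's move fixed and pick my best response to it."""
    return max(["cooperate", "defect"], key=lambda m: PAYOFF[(m, twin_move)])

def fdt_move():
    """The twin runs my algorithm, so choosing m means the twin plays m too."""
    return max(["cooperate", "defect"], key=lambda m: PAYOFF[(m, m)])

# CDT defects no matter what the twin does; FDT cooperates.
print(cdt_move("cooperate"), cdt_move("defect"), fdt_move())
```

(This is only the clean theoretical case; real people are at best noisy copies of each other, which is where the stretching below comes in.)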

How does this differ from CDT in the real world?

(Erm. Most of the things I can think of are sorta convoluted, so I’m stretching the definition here, so the thing I propose isn’t exactly FDT. You’ve been warned.)

But here’s an example anyway. Say you’re trying to convince your friend that the field of rationality is a great one that they should look into. A CDT-esque decision might output the suggestion to send them a lot of related materials and hope they follow up.

If you’re running an FDT-esque procedure, though, you might decide that your friend’s brain is pretty similar to your own. Thus, instead of trying to rely on your naive knowledge of causal models of how propagating info works, you’ll try to send info in a way that would appeal to you. As a result, you might send info in a more periodic fashion rather than a massive dump.

In short, I have no idea what I’m talking about and you should definitely read the FDT paper.


Internal Advisor Theory / Coherent Extrapolated Volition (IAT / CEV):

Internal Advisor Theory says that you should make choices depending on what an internally simulated “advisor” tells you to do. This is not a well-specified algorithm in the literature. However, I’ve found it useful as an alternative when it comes to generating ideas / paths forward. Somehow, such a reframe can prime things I would not have previously thought of.

For example, if I ask myself “What would Nate Soares recommend that I do?” I’ll likely come up with interesting ideas that my naive self wouldn’t have thought of. (Probably stuff like being unable to take excuses, powering on, and integrating Meta-you with all the other parts of yourself.)

A related idea is that of Coherent Extrapolated Volition (CEV), which is based on the idea of the advice we’d give ourselves if we were smarter versions of ourselves.

I’ve found CEV to be a good way to generate self-compassion. A typical reframe has me ask myself the question “What would the best version of myself suggest, given my current situation (and biases)?”

With this sort of algorithm, my thinking takes on a more “benevolently paternalistic” tone. I think about myself more gently, sympathizing with my problem constraints (EX: lack of energy, aversions), and my solution search changes. I end up explicitly looking for ways to work around these difficulties.

How does this differ from CDT in the real world?

For an example, say that I am trying to get myself to exercise. A CDT-esque approach would prescribe that I exercise, but mainly for the physical benefits. CEV would prescribe that I try slowly moving into it because it correctly anticipates that I have a resistance towards exercising.

Perhaps CEV first tells me to slowly get up. I move around a little bit and warm up. I slowly ease into movement. Each step is low-friction and takes into account the bigger picture. Probably, at the end, I start exercising, which is exactly what CDT prescribed.

In short, the end result might be the same, but it feels like CEV helps me do it for the right reasons.


Attractor Theory: The model of Attractor Theory is a little bit more than a decision theory, but it also provides a different way of considering decisions. Here, I’m referring specifically to the part about the mutability of local preferences.

Loosely speaking, Attractor Theory prescribes that you “Choose actions as if your actions also determined your ability to take future actions”.

While CDT looks at the first-order causal effects of your actions, Attractor Theory also looks at second-order effects: things like what sort of person you become upon taking an action. In that way, it feels a little like EDT, but it’s still grounded in causality rather than associations.

How does this differ from CDT in the real world?

For an example, say you can choose between finishing a project and starting an exciting new one. A CDT-esque expected value calculation shows that you predict the benefits from finishing the project to be low and those from starting the new one to be higher. Thus, you should start the new project.

Attractor Theory might prescribe that you buckle down and finish the project anyway, because doing so makes you more of the sort of person for whom finishing projects is a habit. (And let’s say you weigh the increased habituation more than the new project’s benefits.) It correctly infers that taking such an action changes you.
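A rough way to sketch this is to score each action by its direct payoff plus an estimate of the value of the person it turns you into. All the numbers below are made up for the finish-versus-new-project example above.

```python
# A sketch of weighing second-order effects: score each action by its
# direct payoff plus the value of the sort of person it makes you into.
# All numbers are invented for the finish-vs-new-project example.

def attractor_choose(actions, direct_value, self_change_value):
    """Pick the action maximizing direct payoff + value of who you become."""
    return max(actions, key=lambda a: direct_value[a] + self_change_value[a])

direct_value = {"finish old project": 2, "start new project": 5}
self_change_value = {
    "finish old project": 6,  # builds a project-finishing habit
    "start new project": 1,   # reinforces novelty-chasing
}

print(attractor_choose(direct_value, direct_value, self_change_value))
# finish old project (2 + 6 beats 5 + 1)
```

On direct value alone this picks the new project, just as the CDT-esque calculation above does; the second term is what flips the answer.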

In short, Attractor Theory has you take into account mutable desires and potential longer-term causal chains.



It can sort of seem like I’m splitting hairs here with some of my examples of real-world applications of these decision theories. After all, with a broad enough definition of causality, it’s pretty easy to collapse all of these different theories into a looser version of CDT than the one I’ve been using here.

For example, regret is something that happens as a result of your options, so perhaps CDT should de facto take that into account. Likewise, local preference changes are also causal effects that stem from your actions. Thus there is a sense where a lot of this stuff should “already” be considered from an actual causal viewpoint.

However, I think there’s a point to be made about how humans are often fairly narrowly constrained by the framing they use. Throughout this essay, I used the wording “CDT-esque” to try and represent this narrow framing. For me, at least, I can miss out on things like how to account for my biases using the typical CDT procedure.

My typical decision process feels like a shallow CDT that only looks at the immediate causal outcomes of each of my actions.

In light of this, I think considering alternate theories can be important because they bring to light potentially important factors we may have missed, like statistical information and what sort of person we’re becoming. It does seem like there’s relevant information my naive decision-making procedures don’t consider.

Even if these decision theories end up producing the same answer (as they may often do), they may do so by using different internal parts.

And for that reason, I think it’s worth trying to inhabit these different modes of thought, to explore how it feels to have justifications of different strengths and categories moving us to make decisions.

