Humans As Leaky Systems

[Fairly obvious stuff that probably lots of people are thinking about, but now put into simpler words (maybe). Basically, the idea that humans are affected by both ideas and the environment, and this is an important consideration in several models.]

The standard models of human decision-making seem to put up a barrier between the agent and the environment, where they’re treated as two separate entities. You take an action, you evaluate its impact on the environment, and then you take another action and repeat.

However, I think that many interesting ideas come up when you remove that barrier and model humans as leaky systems. By that, I mean using a model that pays attention to how information flows into and out of the agent, and how that information influences it.

Here are three examples where I think it pays to model humans as some sort of a leaky system:

 

Epistemological Hacks:

Epistemological hacks are ways of manipulating our beliefs or altering our understanding of how things work in order to achieve better results.

For example, say Alice needs to file her taxes. She knows that they are due in two months, so she has a lot of time to get it done. However, she convinces herself that they’re due in a month. She sets up a “fake” calendar reminder and alters the date in her planner. As the end of the first month nears, she scrambles to try and file them. Then her dog gets sick, and she panics. At the beginning of month two, Alice realizes with relief that she has another whole month, and she easily finishes the taxes before the deadline.

There’s been a lot of discussion in this vein, and it seems that there are many situations where it can be instrumentally useful to hold false beliefs. In the above example, Alice, in an effort to combat the planning fallacy, gets herself to really believe that the taxes are due early in order to leave slack for unknown unknowns.

The reason that this kind of approach works in the first place, though, is that information in your brain influences your actions. Stuff in your head changes how your head makes decisions. And on top of that, we know that our brains are biased, meaning we need to compensate in some way.

If you’re not running off a false belief, then you’ll need to bolster your belief about the true due date with the additional meta-information that your brain tends to be overconfident: something like “I know my brain doesn’t reliably process due dates, so I better get started early!” At that point, you’ll likely take actions functionally similar to those in the first case and also get started early.

(I think it’s also arguable that the epistemological hacks which run off false beliefs are doing a better job of compressing the same information above, seeing as the results are physically roughly the same.)
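To make that concrete, here’s a minimal Python sketch of the two strategies. The specific numbers, the two-week buffer, and the start_day rule are all hypothetical, invented just for illustration; the only point is that the false belief and the explicit bias correction can cash out into the same behavior.

    # Toy model of when Alice starts her taxes (all numbers hypothetical).
    TRUE_DEADLINE = 60        # taxes are actually due in two months (days from now)
    BIAS_CORRECTION = 30      # explicit allowance for the planning fallacy (days)

    def start_day(believed_deadline, correction=0):
        # Naive rule: start two weeks before the deadline you believe in,
        # minus any explicit correction for known overconfidence.
        return believed_deadline - 14 - correction

    # Strategy 1: the false belief ("they're due in one month")
    print(start_day(believed_deadline=30))                       # day 16

    # Strategy 2: the true belief plus the meta-information about bias
    print(start_day(TRUE_DEADLINE, correction=BIAS_CORRECTION))  # day 16

Both strategies start on the same day; the false belief just packs the correction into the belief itself.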

The main point here is that epistemological hacks (Light Side or Dark Side ones) operate off a model of the brain where decision-making isn’t as clear-cut as “act on the information you have,” because there are additional steps (i.e., our biases) that influence the flow between information acquisition and actual action.

Thus, this places greater emphasis on what sorts of information we’re feeding the system (i.e., us).

 

Attractor Theory:

Attractor Theory, as a model for goal pursuit, roughly says that you should consider two additional sources of influence on your local preferences (which determine which actions you’re even taking in the first place): the actions you take, as well as the environment you’re in.

For example, say Bob is in bed and feels sluggish. If he wanted to feel more active, he might consider going outside, where the sun is shining brightly. He’s predicting that he’ll likely feel more active once he’s in a different environment.

(See the link for longer discussion and nice pictures.)

The implicit model here is one that does away with a lot of the boundary between you and the environment. Your actions change the environment, but the environment also changes you. The causal arrows of influence go both ways.

We’re paying more attention to how the environment is affecting the agent. Of course this is because we’re operating under the assumption that the changes effected by the environment allow the agent to take even further actions. Nonetheless, we’re once again thinking in terms of how the decision-maker is being changed by things flowing into it.
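As a rough illustration, here’s a tiny Python sketch of that two-way influence. Bob’s energy numbers and the action effects are made up for the example, not part of the model itself; the point is only that the environment doesn’t just receive actions, it also feeds back into the agent’s state, which changes what the agent will do next.

    # Toy sketch: the environment leaks back into the agent (numbers are made up).
    def energy_afterwards(action, energy):
        # How the environment each action puts Bob in feeds back into his state.
        effects = {"stay in bed": -1, "go outside into the sun": +4}
        return energy + effects[action]

    energy = 1  # Bob is in bed, feeling sluggish

    # A "barrier" model scores only how appealing each action feels right now.
    immediate_appeal = {"stay in bed": 2, "go outside into the sun": 0}

    # A leaky model also asks where each action leaves the agent afterwards.
    for action in ("stay in bed", "go outside into the sun"):
        print(action,
              "| appeal now:", immediate_appeal[action],
              "| energy after:", energy_afterwards(action, energy))
    # Going outside looks worse locally, but it leaves Bob with the energy
    # to take further actions, which is the point Attractor Theory is making.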

 

Memetic Hazards:

A memetic hazard refers to an idea which is harmful to those who consider it.

An example might look like this: every idea you have is mapped onto some sort of physical neural representation. While it doesn’t seem necessarily valid that each phenomenological (i.e., from our internal perspective) “thought” corresponds to a specific neuron or anything like that, it does seem valid to state that thinking about different things could lead to different physical states of the brain. From that, it (possibly) follows that there exists a sequence of thoughts which, if you entertained them, would shift things around in your head in such a way as to physically cause you to have a stroke.

But that’s a little out there.

Another, perhaps more familiar example is that of certain worldviews like nihilism. When you’re considering arguments for why things don’t matter, it can be easy to fall into depressive or hedonistic states, which (from a more “normal” worldview) are harmful. In general, it seems like large frame / value shifts of this kind could be thought of as memetic hazards.

Once more, these sorts of things seem like they’d only work in a model where thoughts have influence over other thoughts (and, by extension, our actions). The type of rigid, fixed preferences we might Platonically assume of a standard agent doesn’t always map well onto how humans behave.

For us, the thing we consider information with is also the thing that can be influenced by information, which is also the thing that we use to make decisions. In a way sort of analogous to the crazy neuron example above, it also seems plausible that information in our heads could interact in negative ways, which would in turn shape other thoughts.

It’s as if we’re always playing Nomic in our own minds.
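Here’s a very rough sketch of that Nomic-like quality. The weights and the “nothing matters” idea are placeholder examples, not claims about actual cognition; the structure to notice is that the same thing that evaluates ideas is the thing an idea can rewrite.

    # Toy sketch: an idea that edits the evaluator that will judge later ideas.
    values = {"meaning": 5, "comfort": 2}   # placeholder weights

    def evaluate(option, values):
        # Score an option using the agent's current values.
        return sum(values[k] * option.get(k, 0) for k in values)

    def consider(idea, values):
        # Entertaining an idea can change the weights themselves.
        if idea == "nothing matters":
            values["meaning"] = 0           # the frame shift sticks around
        return values

    meaningful_project = {"meaning": 1}
    print(evaluate(meaningful_project, values))   # 5, before the idea
    values = consider("nothing matters", values)
    print(evaluate(meaningful_project, values))   # 0, after: later judgments change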

 

Conclusion:

In short, this is basically an extension of the “humans are biased” paradigm we’re all familiar with. I’m merely pointing out that lots of things are mutable and that information has influence, and trying to make the assumptions about how humans “really” operate more explicit.

I think modeling humans as leaky systems is “more true” than the typical agent-environment distinction. I also think that having some sort of hybrid model allows us to perform a lot better on certain tasks, which is what Attractor Theory tries to do.

There’s more to say here about things like the role of repetition and other considerations when making decisions, or the satisfice vs. optimize distinction.

But for now I just wanted to make this model explicit, seeing as it seems to be an implicit assumption in several discussions in rationality.
