On Models

The word “model” gets tossed around a lot in the rationality community, and, for quite a while, I really wasn’t sure what it was referring to. Even now, I’m wondering if we’re all using roughly the same definition. I worry that we might be throwing around words like “mental models” and “shared model building” as ingroup signals, rather than as concrete handles for distinct concepts.

In this short essay, I’ll be trying to lay out some basics of what I think models are about, and if this all goes horribly wrong, maybe someone can come along and give a better explanation.

So. Models.

Models represent your understanding of how things work. Put another way, a model is something that says, “If you give this input to this system, then it will output this other thing.”


Physical systems give us some simple examples:

If you see your pen roll off the table, your model of gravity (and how things interact with it) tells you that you’ll find the pen on the floor (and not suspended in the air).

If you add some ice to a bowl of warm water, your model of thermodynamics tells you that the bowl of water will be cooler than before.

If you tap a solid object, your model of sound and vibrations tells you that you’ll hear a sound in response.
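If it helps to make the input-to-output framing concrete, here’s a minimal sketch in Python. The cooling constant is invented purely for illustration and isn’t real physics:

    # A model, in this framing, is just a function from an input to a predicted output.
    DEGREES_LOST_PER_ICE_CUBE = 2.0  # made-up number, not a real thermodynamic constant

    def predict_water_temp(current_temp_c, ice_cubes_added):
        """Crude 'model of thermodynamics': adding ice should cool the water."""
        return current_temp_c - DEGREES_LOST_PER_ICE_CUBE * ice_cubes_added

    # Input: warm water at 40 C plus 3 ice cubes. Output: a prediction that it's cooler.
    print(predict_water_temp(40.0, 3))  # -> 34.0, i.e. cooler than before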

Models can be thought of as implicit or explicit. In the above examples, it’s likely that you built up those physics models as a result of your experiences within the world; you’ve passively developed a sense for what would and wouldn’t work. In contrast, you can also have models that are formed, link by link, as a result of a more verbalized understanding.

For example, many of our models regarding health and nutrition might be things that we learned secondhand. You might have read somewhere that “long periods of sitting have deleterious health effects independent of adults meeting physical activity guidelines”, so your model for the health effects of sitting down outputs “deleterious health effects”.

One thing explicit models might lack, though, is granularity. Because transmitting information is inherently lossy (especially when it comes to translating what one learns from experience), models formed from explicit chains of reasoning can be fuzzy when supplementary information is missing.

For example, in the above model regarding sitting, “deleterious health effects” remains a vague output unless you also have background knowledge about what those health effects are, and what they actually look like when they afflict someone.

Models are about expectations. The things you expect and predict can be thought of as a consequence of the models you have.

For example, if you spend a lot of time on social media, you’ll likely develop a good intuition for which times are best to post, such that you get the most engagement. Then, if you imagine posting on a Friday evening, your expectation of how many Likes you’ll get is the output of your model.

Surprise is an indicator of model inadequacy. When something is unexpected, it’s something previously unaccounted for by our understanding of how the world works. This is a good time to take a step back and reevaluate our models because we want our models to be accurate; they’re useful to the extent that they help us navigate the world.

For example, if your best friend gets visibly distressed at what you thought would be a harmless practical joke, this is pointing to something off with your current understanding of how your friend operates. There is very likely something you don’t know about them.
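As a sketch of how expectation and surprise fit together, here’s some illustrative Python using the social-media example from above; the expected-Likes numbers and the surprise threshold are invented:

    # A model produces an expectation; surprise is a large gap between the
    # expectation and what actually happened. All numbers here are made up.
    expected_likes = {"Monday": 10, "Wednesday": 15, "Friday": 40}

    SURPRISE_THRESHOLD = 20  # how far off we can be before calling it a surprise

    def check_for_surprise(day, actual_likes):
        prediction = expected_likes[day]
        error = abs(actual_likes - prediction)
        if error > SURPRISE_THRESHOLD:
            # The model didn't account for this; time to step back and revise it.
            print(f"{day}: expected {prediction}, got {actual_likes} -- revisit the model")
        else:
            print(f"{day}: expected {prediction}, got {actual_likes} -- model looks fine")

    check_for_surprise("Friday", 42)  # close to expectation: no surprise
    check_for_surprise("Friday", 3)   # far from expectation: a sign of model inadequacy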

Models are in our heads, and they are almost always “wrong”. Having human brains means there are limits on how much information we can store, recall, or use. We’re always going to be trading off granularity for compression.

However, the hope is that these tradeoffs don’t cost us too much in accuracy or usefulness. Even when we’re working with highly simplified models, we’d like them to be helpful for the sorts of situations we find ourselves in. For example, a model of people as NPCs following scripts is clearly a poor representation of the truth, but it might yield useful predictions about people’s actions often enough to justify its use.

So, as some people are fond of saying, “All models are wrong, but some models are useful.”

And that’s a quick crash course on models.

Thus, when we do talk about “shared model building”, I hope we can talk about creating greater group consensus on our understanding of systems and how they work. I’d like for it to mean that we’re creating common background knowledge about these expectations.

And not, say, loaning out miniature replicas of the Empire State Building.
