Is Mindlevelup Even Helpful?
Here at MLU, I write a lot about productivity in general and about how concepts relate to one another. Most of this material is pretty anecdotal and hand-wavy, in the sense that there isn’t good outside research backing up most of my claims. I’ve found this problematic when trying to justify these thoughts to other people: in the realm of phenomenology, we lack good verification tools outside of our own experiences.
When we’re talking about concepts and levers that seem to exist only in our own heads, is this valuable? Is this even coherent? I have no guarantee that the way things are represented for me is similar to the way they are for you; we’re making some general assumptions about how all of us work. More than that, is there value in investigating how our thoughts impact our behavior? What is the benefit of a simple mental model of how something in our brain works?
I assert that thinking about this sort of thing can be helpful, even when it doesn’t correlate well with facts about reality. Having an imperfect view of how your brain works is still probably better than black-boxing your model of cognition entirely. As with the 80/20 rule, I think most of the value comes from having any sort of system in place at all; refinements offer little marginal value compared to the jump from no model to some model.
The caveat here is that the sort of imperfect mental models that are helpful are also the sort that remain grounded in reality in some sense. Reference class forecasting, for example, doesn’t rely on good causal models of reality (which would probably offer better predictions), merely on past history. But it still uses real-world data. In much the same way, heuristics based on anecdotal experiences still rely on things that actually happened. I think this is what distinguishes a mental model of this sort from a wishful-thinking model of “how the brain should work”.
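For concreteness, here’s a minimal sketch of reference class forecasting in Python. The function name and the project data are hypothetical, invented for illustration; the point is only that the forecast comes from how similar past cases actually turned out, not from a causal model of the current case.

```python
def reference_class_forecast(past_outcomes, percentile=0.8):
    """Forecast by consulting the outcomes of similar past cases,
    rather than modeling the causes of this particular case."""
    ordered = sorted(past_outcomes)
    # Take the value at the given percentile of the reference class:
    # a conservative estimate that most past cases fell at or under.
    index = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[index]

# Hypothetical data: how many days similar past projects actually took.
past_project_days = [10, 12, 15, 20, 9, 30, 14, 18]
print(reference_class_forecast(past_project_days))  # → 20
```

The choice of the 80th percentile is itself a judgment call; the technique just forces the forecast to be anchored in the historical record rather than in optimism about the case at hand.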
This is far from the only problem. Relying on imperfect models can be a slippery slope towards being epistemically lax, which may lead us to incorrect beliefs about reality. Similarly, the freedom to choose our own data from our experiences invites selection bias, where cherry-picking the right evidence can lead to poorly grounded beliefs. Confirmation bias can lead us to self-select for what we want to see. Often, perceived improvement in X is merely improved perception of X.
The most important factor here, however, is your behavior. No one else has access to your mental state, which means it’s the things you do that other people tend to measure you by. Where phenomenology can be a good way to experience your own life, other people will probably take a more behaviorist approach towards you.
If using some helpful mental models allows you to do lots of impressive things that you don’t think you could have done otherwise, it’s all good. Put bluntly, you don’t have to believe what I say, but you can verify what I do. For the purpose of optimizing, I think having systems for handling the levers and switches in your brain can be helpful, as long as it appears they’re actually helping.
So, what does this mean for the bulk of the content on MLU, which is probably biased in lots of ways? Though the concepts may be interesting, this is far from the best introduction to good mental models. It’s really just a helpful way to get myself to think more about these sorts of ideas. And the only thing that can really tell you whether these ideas are relatable and helpful is your own experience.
On imperfect models: an imperfect model is far better than no model at all. My favorite example for this is the efficient market hypothesis. If you had to explain what EMH was to a stranger in under thirty seconds, it would be better to just say that EMH is “in the long run, no one can beat the market” rather than “over a long period of time, while some investors can gain a temporary advantage by using insider information, in the long run, no one can outperform the market unless they take higher risk, as the prices always change to reflect relevant information.”
No, the more accurate version of the EMH doesn’t fit on a bumper sticker. It also takes more than thirty seconds to explain. Just because the imperfect model is wrong doesn’t mean we should throw it away: it would be better to say that the earth is round than that the earth is flat, even though we now know that the earth is shaped like an irregular oblate spheroid.
I think you nailed the last paragraph. It seems that MLU helps the writer as much as, if not more than, the reader. Articulating ideas always helps make them more concrete, and in this sense, it helps the writer figure out what they believe.
I misread your first sentence to be: “One imperfect model: an imperfect model is better than no model at all” and I started thinking about meta-models…
That aside, your economics analogy is another good example. I’m reminded of Fermi estimates, too, where we care more about the overall “size” of the thing than about its specific measurements, so approximations suffice.
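To illustrate, here’s a toy Fermi estimate in Python, in the classic piano-tuner style. Every number below is a hypothetical round figure chosen for illustration; the output only claims to be right to within an order of magnitude.

```python
# A toy Fermi estimate: rough order-of-magnitude reasoning, not measurement.
# All inputs are hypothetical round numbers, chosen for illustration.
city_population = 3_000_000        # people
people_per_household = 2           # rough average
households_with_piano = 1 / 20     # guess: 1 in 20 households owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 1000  # ~4 per day * ~250 working days

pianos = city_population / people_per_household * households_with_piano
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # → 75, an order-of-magnitude answer, not a count
```

Each factor could easily be off by 2x in either direction, yet multiplying several independent rough guesses tends to land within the right order of magnitude, which is all a Fermi estimate promises.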
I don’t know much about economics, however. I’m just going through A Random Walk Down Wall Street right now, so I expect I’ll be seeing more about efficiency and predictions with regards to markets soon.
Hi Owen,
(I can call you Owen, right?)
Do read about economics. It’s a field replete with wonderful examples of human behavior in general. Surprisingly, economics seems to share more overlap with cognitive psychology than with something like mathematics.
(meta-models sound like something nice to think about by the way)
Hi CJ,
Feel free to call me Owen. (I’m unsure if I explicitly mentioned that somewhere on MLU; have we met elsewhere at some point?)
I’ve been exposed to some behavioral economics (via Kahneman and Schelling), so I’ve gotten a taste of the flavor that I think you’re gesturing at.