Is Mindlevelup Even Helpful?
Here at MLU, I write a lot about productivity in general and how various concepts relate to one another. Most of this material is pretty anecdotal and hand-wavy, in the sense that there isn't good outside research backing up most of my claims. I've found this problematic when trying to justify these thoughts to other people: in the realm of phenomenology, we lack good verification tools outside of our own experiences.
When we're talking about concepts and levers that seem to exist only in our own heads, is this valuable? Is it even coherent? I have no guarantee that the way things are represented for me is similar to the way they are for you; we're making some general assumptions about how all of us work. More than that, is there value in investigating how our thoughts impact our behavior? What is the benefit of a simple mental model of how something in our brain works?
I assert that thinking about this sort of thing can be helpful, even when the models don't correlate well with the facts of reality. Having an imperfect view of how your brain works is still probably better than black-boxing your model of cognition entirely. As with the 80/20 rule, I think you get most of the value from having any sort of system in place at all; further refinements offer less marginal value than the jump from no model to some model.
The caveat here is that the sort of imperfect mental models that are helpful are also the sort that are still grounded in reality in some sense. Reference class forecasting, for example, doesn't rely on good causal models of reality (which would probably offer better predictions), but merely on past history. Still, it uses real-world data. In much the same way, heuristics based on anecdotal experiences still rely on things that actually happened. I think this is what distinguishes a mental model of this sort from a wishful-thinking model of "how the brain should work".
This is far from the only problem. Relying on imperfect models can be a slippery slope towards being epistemically lax, which may lead us to incorrect beliefs about reality. Similarly, the freedom to choose our own data from our experiences carries inherent selection biases: cherry-picking the right evidence can lead to poorly grounded beliefs, and confirmation bias can lead us to self-select for what we want to see. Often, a perceived improvement in X can merely be attributed to an improved perception of X.
The most important factor here, however, is your behavior. No one else has access to your mental state, which means the things you do are what other people tend to measure you by. Where phenomenology can be a good way to experience your own life, other people will probably take a more behaviorist approach towards you.
If using some helpful mental models allows you to do lots of impressive things that you don't think you could have done otherwise, it's all good. Put bluntly, you don't have to believe what I say, but you can verify what I do. For optimization purposes, I think having systems for handling the levers and switches in your brain can be helpful, if it appears that they're actually helping.
So what does this mean for the bulk of the content on MLU, which is probably biased in lots of ways? Though the concepts may be interesting, this is far from the best introduction to good mental models; it's really just a helpful way to get myself to think more about these sorts of ideas. And the only thing that can really tell you whether these ideas are relatable and helpful is your own experience.