Realistic Expectations

     Lately, I’ve realized that there’s something I’ve been fundamentally doing wrong in my head when it comes to building good mental architecture:  Whenever I decide to integrate a new habit of mind, I get easily frustrated when it doesn’t stick after a few days.  This has happened again and again.

     I’ve finally realized that my expectations are the culprits here.

     To judge how long it takes to start using a certain heuristic, I appear to have been relying on intuition, classifying such habits under a “mental stuff” label, because it seems like mental notions should be easier to learn.

     Perhaps more concretely, I’ve been fooled because mental notions feel like declarative knowledge, but they’re really more procedural.  Knowing about pre-mortems seems easy; I just link it to other concepts under the “planning” label in my head.  But this misses the point that the whole reason I even understand pre-mortems is to actually use them.

     I confess that I had a similar experience with mathematics a while back.  For much of the course, I merely reviewed my notes, letting my brain run over the same grooves.  The familiarity of the concepts gave me the illusion of understanding; yes, I could grasp the main ideas, but comprehension and capability are miles apart.  When it came time to independently solve problems, I was totally lost.

     What appears needed in these situations where certain topics “masquerade” as declarative knowledge (when you actually care about the procedural part) is to find analogs to concrete procedural skills.  For example, I have much better estimates on how long it will take to learn an instrument, a new magic trick, or a sport.  In my mind, the aforementioned actions feel very “physical”, rather than “mental”.  That physicality appears to trigger a reframe.

     The key, then, is to renormalize my expectations for learning new habits of mind, by drawing parallels to analogous skills where I have good estimates.  Reframing the situation in this way makes it less frustrating when I fail to develop agency in a few days.  Other skills have learning timelines of weeks or months, and that’s with solid practice.

     To think otherwise for learning mental skills would be unrealistic.

     Similarly, reference class forecasting looks at the “base rate” to make predictions.  Statistically speaking, I’m probably not an outlier, so the average is a good predictor of my own performance.  When it comes to habit change, I can estimate how likely I am to succeed, and how long it will take, by looking at people as a whole.

     I just looked up the base rate for habit change.  Looks like lots of people cite the Lally study which had an average length of 66 days to ingrain a new habit.  The data ranged from 18 days to over 250 (the study ran for just 12 weeks, so this was extrapolated data).  
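     As a back-of-the-envelope check, those cited figures can be turned into a rough calibration curve.  Everything below is my own toy model, not the study’s: I treat the 18-to-254-day range as an approximate 90% interval and fit a log-normal distribution to it.

```python
# Toy calibration sketch from the cited habit-formation figures.
# The log-normal shape and the percentile reading of the range are
# my assumptions, not claims from the underlying study.
import math

low, high = 18, 254  # treated as rough 5th/95th percentiles (assumption)

# Fit a log-normal by matching the log-midpoint and log-spread of the range.
mu = (math.log(low) + math.log(high)) / 2
sigma = (math.log(high) - math.log(low)) / (2 * 1.645)  # z for a 90% interval

def prob_formed_by(days):
    """P(habit ingrained within `days`) under the fitted log-normal."""
    z = (math.log(days) - mu) / sigma
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

for d in (14, 30, 66, 120):
    print(f"{d:>3} days: ~{prob_formed_by(d):.0%} chance the habit has stuck")
```

     Under these loose assumptions, a new habit has very little chance of sticking within two weeks and only around a coin-flip chance by day 66, which is exactly the recalibration the base rate suggests.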

     Some scientists surveyed were also fairly pessimistic about the timeline for breaking a habit, estimating anywhere from two to six months.

     Welp, I’m definitely going to have to recalibrate now.

     Learning new mental tricks aside, there’s a related problem I’ve been bumping into often, regarding my thoughts in general:  I can’t seem to hold all of them in my head at once.

     What I’m dubbing the “transience of thought” is basically the phenomenon where I forget lots of helpful things I read or encounter.  Progress isn’t linear.  Many of my helpful thoughts fall by the wayside, never to be seen again.  Or, I’ll forget most of the great insights from a book I recently read.

     Once again, this appears to be a problem of expectations.  I’m sure that with the right amount of reinforcement and repetition, these ideas can be more deeply ingrained.

     This has led me to think about what it feels like to have really subsumed a mental heuristic.  I took a look at some mental tools I already use, at a deep level, and tried to describe how they feel:

     Upon examining my optimizing mindset:

     “Having a mental habit deeply entrenched doesn’t feel like I’ve got a weaponized skill ready to fire off in certain circumstances (as I would have hypothesized).  Rather, it’s merely The Way That I Respond in those circumstances.  It feels like the most natural response, or the only “reasonable” thing to do.  The heuristic isn’t at your disposal; it’s just A Thing You Do.”

     In a distant way, it looks like our intuitive models and expectations have some effect on our actual behavior.

     If I have unrealistic timeframes for mental habit change, then I’m more likely to get frustrated at not seeing early results.  I’m basically the analog of the dieter who quits after a few days.  (Expectations aside, there’s also the more obvious notion that our mental tools are what we use to respond to situations so of course they’d affect our behavior.)

     One recent idea I’ve been flirting with concerns Kahneman’s System 1 and System 2, or the “rider and the elephant” models of the mind.  In such models, the “rational” side is always portrayed as dominated by the more “primal” side; in any case, there is always an implication of a struggle for dominance between the two sides.

     Though this may be an accurate depiction of behavior in cases of time-inconsistent preferences, I can’t help but wonder if these models set up a self-fulfilling prophecy for our “rational” side to ultimately lose when confronted with “temptations”.  The implied power struggle between the sides also seems damaging.

     I’d like to be able to reconcile all of my different goals, not fight myself at every turn as different urges try to assert themselves.  

     I’d love to see some models with similar descriptive power, or analogs to actual brain functions (the models should match reality to some extent!) but with a more positive spin on how the two sides of the brain interact.  A more goal-based model (unlike Kahneman’s, which is task/function-based, I believe) might involve collusion between temporal selves.  I’ve heard others talk about visualizing their lives as a series of Prisoner’s Dilemmas against their past and future selves, for example.
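     To make that framing concrete, here’s a minimal sketch of the iterated game.  The payoff numbers and the tit-for-tat rule are purely illustrative assumptions of mine, not part of anyone’s actual model:

```python
# Sketch of the "Prisoner's Dilemma against your temporal selves" framing:
# each day, the current self either cooperates (does the practice) or
# defects (coasts), playing against what yesterday's self did.

PAYOFFS = {  # (today's move, yesterday's move) -> today's payoff
    ("C", "C"): 3,  # both selves cooperate: compounding progress
    ("C", "D"): 0,  # you practice, but past-you left nothing to build on
    ("D", "C"): 5,  # you coast on past-you's work
    ("D", "D"): 1,  # mutual defection: no effort, no progress
}

def run_days(strategy, days):
    """Play `days` rounds; each self sees only yesterday's move."""
    history, total = ["C"], 0  # assume a cooperative day zero
    for _ in range(days):
        move = strategy(history[-1])
        total += PAYOFFS[(move, history[-1])]
        history.append(move)
    return total

tit_for_tat = lambda prev: prev   # mirror yesterday's self
always_defect = lambda prev: "D"  # always coast

print(run_days(tit_for_tat, 30))
print(run_days(always_defect, 30))
```

     The familiar result carries over: a self that mirrors yesterday’s cooperation compounds its gains, while a self that always coasts gets one free ride and then settles into the low-payoff mutual-defection rut.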

     Such a direction looks ripe for lots of new analogies and (hopefully) some new perspectives on bringing together our goal systems in better aligned ways.  

