I recently wrote an essay arguing that a reductionist approach could plausibly let us artificially create very “human” things, like good literature, in the future. I went on an extended argument involving supercomputers and some other sketchy things.
“Hold on,” you might say, “why do all that? Wasn’t it obvious once you said you were taking a reductionist approach? If people can be broken down into little moving things, doesn’t it immediately follow that we can use little moving things to emulate human tasks?”
Erm, that’s actually a pretty good point.
I think the takeaway here is that I’m not a full-on automated theorem prover.
I tend to have pretty good access to all my thoughts. I am not, however, automatically privy to all the logical implications of what I know.
Kids, for example, don’t immediately deduce all of Peano Arithmetic after being taught the fundamentals. Math is an easy target because it runs on logical inferences, but this is true even for everyday life. Consider the thoughts you generate when you’re vacuuming or falling asleep. New ideas come to us all the time, even when we’re not gathering new information.
In addition to learning about new things, it seems like another integral part of figuring stuff out is finding ways to link it all together. Even just the stuff we already know.
This ends up being another reason that obvious advice is still very solid. Hindsight can make such advice look simple, but could we really have generated such a thing on our own?
I think this sort of “conclusion-blindness”, where we need external prompting or additional time to really grasp the implications of our knowledge, also applies to motives: if you ask someone why they are doing something, you often get answers that are far less clear than what they are doing. Many people are like lazy evaluators, in the sense that they don’t answer such queries until explicitly faced with them.
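The “lazy evaluator” metaphor comes from programming, where a lazy value isn’t computed until something actually demands it. A minimal sketch (the class and names here are purely illustrative, not from any real library):

```python
# A toy illustration of the "lazy evaluator" metaphor: the answer to
# "why am I doing this?" isn't worked out up front -- it's only
# produced (and then remembered) when someone explicitly asks.

class LazyMotive:
    def __init__(self, introspect):
        self._introspect = introspect  # how to work out the answer
        self._answer = None            # nothing computed yet

    def why(self):
        # Evaluation happens only on the first explicit query,
        # then the result is cached -- like finally being forced
        # to articulate a reason and sticking with it.
        if self._answer is None:
            self._answer = self._introspect()
        return self._answer

motive = LazyMotive(lambda: "it seemed like a good idea at the time")
# No introspection has happened yet; it only runs when we ask:
print(motive.why())
```

Until `why()` is called, the reason simply doesn’t exist in evaluated form, which is roughly the claim about people above.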
In the same vein, if you see a simple fix or a quick way to optimize what someone is doing, it is quite likely they have not actually considered how to do better. (EX: Not googling “how should I exercise?” before starting to exercise, not asking “do I know anyone who’s an expert in X?” before starting X, etc.)
A really good example of this happened to me the other day:
“‘Reading’ is a very poor word,” I’d said to a friend. “We use it in lots of contexts, and it’s hard to really pin down what we mean when we say ‘read’. Consider that ‘reading’ can mean all these things:”
- The physical process of scanning the words on the page. (You can do this and still not understand what you read.)
- The actual processing of the words on the page and understanding them as they pass through your head.
- Remembering what just passed through your head after a few pages. Building a coherent continuation of the reading.
- Longer-term memory of what you read. (EX: Telling a friend about it a few days later.)
- How the reading caused changes in you. (EX: How your viewpoints changed after reading it. The process by which reading alters you as a person.)
“These are all fairly distinct processes,” I’d said. “Lumping them all together doesn’t seem helpful at all!”
“Huh,” said my friend, “that’s neat. So what’s your plan for figuring out how to optimize your reading and language use in light of this?”
I didn’t have an answer.
[…] brought this up in Human Incompleteness, but I’m running a bit more with the idea […]