Recently I’ve been looking through some college forums, like College Confidential, and I often see people ask something like, “I’m applying to Harvard. Are there qualities I should write about/focus on to let them know I’m the right candidate?”
This has left me confused.
“Yes! Actually, if you write about how you did X in high school, you will be guaranteed admission to Harvard, no questions asked!”
Is such a reply really what they reasonably expect to hear in response?
Such an answer seems absurd in our world, but the question I want to explore is why. Why does it seem like a ridiculous idea to admit people based on the presence of a single factor, and why do we hope for it anyway?
Let’s assume a world where colleges do accept people based on one metric, and maybe we can see where things go astray. In this alternate universe, let’s say Harvard accepts everyone with an IQ of 130 or higher. Taken at face value, this doesn’t seem too bad. It seems way easier on the admissions staff, and there’s less uncertainty for the students. They can easily verify if they’re eligible by taking an IQ test.
However, soon after, Harvard receives a deluge of applicants with IQs of 130, and the admissions staff has more qualified people than it can admit. How surprising! The students have discovered they can cheat the IQ tests. This, in turn, makes the IQ tests ineffective as a way of filtering the students.
Filters, like an IQ requirement, allow colleges to select for certain types of students. Theoretically. If colleges want smart students, then basing their decisions on some sort of test that smarter students score higher on should allow them to nab smarter students. In practice, most tests can be gamed, which defeats the purpose of the test in the first place.
If everyone knows that some college admits everyone who passes a certain metric, then everyone will try to boost said metric (yes, I’m assuming rational students). But what if it wasn’t common knowledge? If only you knew that Harvard accepted people who had IQs over 130, you could easily game the system and be accepted, because no one else is gaming the system along with you.
So it looks like our curious questioner from the first paragraph is looking for an edge: knowledge that other people don’t have, to boost his or her chances. It makes sense to look for a leg up on the competition. Yet, even in our alternate universe, this doesn’t make much sense. Assuming that schools are trying to filter for smart students, little good appears to come from keeping that information secret.
But if colleges want smart students, why can’t they just say that they’ll only accept smart students? Why go through all the trouble of using something like a test that can be cheated? One problem is that “smartness” doesn’t really exist as a real object, unlike “apples” or “cars”. “Smartness” is a blanket term we use to refer to a large class of behaviors and thought processes, which include things like problem solving and idea generation. If a test tells us who is “smart”, and “smart” people are good at things like problem solving, then we’re really just saying that people who do well on the test are also good at things like problem solving; there’s a correlation between the two factors.
This is also why tests should, in theory, be a good way to screen people. If there is a high correlation between a factor that is easy to measure (like test scores) and other desirable-but-harder-to-test factors (like strong problem-solving ability), then, depending on the strength of the correlation, we can be reasonably confident that most of the students who pass the test also display the other qualities we want.
However, there may be other factors which have an even higher correlation with test scores than the desirable-but-hard-to-test factor, like reaction time or writing speed. This may be unexpected for the college, which chose the test because it correlated well with “smartness” but wasn’t anticipating that people would train specifically for the test. So instead of a class filled with students who have good problem-solving skills (and other “smart” behavior), the class is filled with students who are really fast at filling out bubbles and pattern-matching.
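This failure can be sketched numerically. Below is a toy simulation (all numbers and distributions invented purely for illustration): each student has a latent “smartness”, and a test score that is smartness plus some noise. When students also pour effort into test-specific prep, which boosts the score without touching smartness, admitting the top scorers no longer selects the smartest class.

```python
import random

random.seed(0)

N = 10_000   # applicant pool
ADMIT = 500  # seats available

# Each student: (latent smartness, honest score, gamed score).
students = []
for _ in range(N):
    smart = random.gauss(100, 15)            # the hard-to-measure quality
    honest_score = smart + random.gauss(0, 5)  # test as an honest proxy
    prep = random.uniform(0, 30)             # test-specific training effect
    gamed_score = honest_score + prep        # what the college actually sees
    students.append((smart, honest_score, gamed_score))

def mean_smartness_of_top(score_index):
    """Average latent smartness of the ADMIT highest scorers by the given score."""
    top = sorted(students, key=lambda s: s[score_index], reverse=True)[:ADMIT]
    return sum(s[0] for s in top) / ADMIT

print("admitting on honest scores:", round(mean_smartness_of_top(1), 1))
print("admitting on gamed scores: ", round(mean_smartness_of_top(2), 1))
```

Under these made-up numbers, the class admitted on honest scores has a noticeably higher average smartness than the class admitted once prep dominates: the correlation the filter relied on is partly crowded out as soon as everyone optimizes the proxy instead of the underlying quality.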
In short, basing decisions solely on something like a test score, which approximates (but does not perfectly correspond to) a harder-to-measure quality, runs into trouble when people maximize for the test rather than for the quality the test is supposed to measure. (Signaling theory, which concerns hard-to-fake information, is relevant here.) By analogy, I’m reminded of the “nearest unblocked strategy” concept from the field of AI, where imperfect correlation between what you specify and what you want leads to undesirable outcomes.
Perhaps this is why colleges have decided to take a more holistic view when it comes to gauging applicants. When students have lots of potential factors to optimize, it’s less likely they’ll be able to game them all. But this also makes things more uncertain from the admissions perspective, especially when two activities or tests aren’t really comparable. As a result, students are also more hopeful, as the lack of a strict yes/no test means “there’s always a chance”.
Of course, there’s a deeper question about inequality here. It’s undeniable that not everyone is good at the same things, and not all these traits are equally helpful for solving different problems. If we had a way to ascertain these hard-to-measure qualities directly, then we’d remove a lot of the uncertainty that currently exists about eligibility. Which means that if there were a foolproof “smartness” test and certain schools used it as a filter, then some people who didn’t meet the threshold would know for sure beforehand that they wouldn’t get in. (Yes, there’s an assumption here that schools are looking for only one quality, which isn’t actually true.)
If everyone is not equally adept, is it okay to exclude people from doing what they love, just because they’re not very good at it? It seems like our gut reaction is, “No, of course not, we can’t infringe on people’s rights like that!”, but our current society is not an economy based on preferences. In our world today, employers are looking for people who excel at doing things, not people who like doing them. Which also sort of seems obvious, because why would you hire anyone who doesn’t do a good job?
So once again we have this disconnect between liking to do things and actually being good at them. We want to keep people happy, but we are also concerned with real-world results. Right now, we appear to be geared toward outcomes, because it seems like a more efficient way to… do what, exactly? I can see that the current system could potentially lead to more happy people than just placing people in jobs they enjoy, or hooking them up to a serotonin drip all day.
But what we have right now is a system that was not designed to maximize human values: our economy doesn’t have an overseer that makes sure things are headed in a positive direction. It’s a bleak thought, but it seems possible that we’re currently in a special place that could eventually break down in a race to the bottom.
Basically, trying to get A by maxing out B, just because A and B are pretty closely correlated, can still lead to big problems.