Tests vs. the Real World
Last Saturday was national Mensa Testing Day, and at the urging of a friend, I decided to go take the test. I won't have my results for several weeks, but I thought the contrast between the test and the real world was interesting enough to write about.
The most striking difference between the way they test for intelligence and the way intelligence is used in the real world is that the Mensa sub-tests have time limits… and they're short! You have to be fast even to be able to complete all the questions. The proctor explained that they don't deduct points for incorrect guesses, so the incentive is clearly to attempt as many questions as possible.
In the real world, haste is bad. If I'm working on a difficult problem at work it is absolutely essential to work slowly and methodically. Getting it wrong means not only that you'll have to fix it later, but also that you'll have to spend time — usually a lot of time — debugging something that shouldn't have been broken in the first place. In the real world the penalty for being wrong is much higher than the reward for being right! It is better to go slowly or even to give up than to arrive quickly at the wrong answer.
As I was taking the test I was surprised (although in hindsight it is sensible) at the number of calculation questions. At work I am surrounded — sometimes literally — by computers. We do not calculate by hand. It simply doesn't make sense to do so when computers are better and faster at calculation than humans are. Naturally, my manual calculation skills have deteriorated considerably since I was in school. (I think my mathematical reasoning skills are still intact, but there was little testing for that.) If I didn't make the Mensa cut, I'm sure this will be the culprit. Here again, the type of testing they do is contrary to how intelligence is used in the real world.
There were far more questions than I expected where the task was to identify some attribute of or relationship among things, then pick the item from the answer list with the same (or opposite) attribute or relationship. I greatly dislike these questions on methodological grounds. The test authors have a particular "correct" relationship in mind, but it is often reasonable to see a different but equally real relationship, leading you to the "wrong" answer even though you've recognized something genuine. I don't know how (or whether) test authors address this problem. Obviously I don't know how many questions I or others might have gotten wrong for this reason.
I thought the language/vocabulary portion was very easy. This was amusing because as a child I always scored better on the math portions of standardized tests than on the language portions. Despite my getting a math minor in college and taking only the bare minimum of English classes, my skills seem to have flipped. This is probably explained by the fact that I use language all day every day, but almost never do manual calculations anymore.
The final amusement was that I did better on the memory test than I expected. I often joke about my poor memory, and my friends know how I forget things, but it seems that when I'm actively trying to remember, I can do a respectable job.
The Mensa test is no doubt very good at measuring what it's designed to measure. But I'm concerned about how well that correlates to the way intelligence is used in the real world.