An interesting article about some new ideas on autism, and about autistics who are using modern technology to speak out against common media perceptions that they are mentally retarded and locked in a world of their own. People are challenging the commonly held belief that autism is a disease and the people with it have damaged brains, when in reality they may just have architecturally different brains.
Interesting stuff
Comments
Been thinking on this awhile. I hate IQ tests. IQ tests were invented by people (the US Immigration service) who are good at them, to distinguish people that they like from people they don't like. They've morphed since inception but still hold to the same ridiculous checks and balances based around pattern theory, numeracy and literacy. When people generally mention IQ as a measure of this ethereal facet of human cognition, I ask them three things:
1. What is intelligence? And see if they manage to get two out of the 8 facets the vague and babbling OED manages to conjure.
2. Does doing the test make you more intelligent?
3. If I made a computer program that could do the test better than any human, would that make it more intelligent?
I normally get 'No' for 2 and 3, although you can do better in the tests the more of them you do. I remember my sister doing a lot of IQ tests at school and her IQ hitched up each time. I can write a computer program that will pass most IQ tests, given data and time. I can also write a computer program that can statistically organise many different sources much faster and more accurately than a human can, and without bias. I assure you that these computer programs would not be considered 'intelligent' by any human standard.
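To make that concrete, here's a minimal sketch of the sort of thing I mean (my own toy example, not lifted from any real IQ test): it "solves" the classic "what comes next?" number-sequence items by taking successive differences until they settle down, which cracks any polynomial pattern without anything you'd call understanding.

```python
def next_term(seq):
    """Predict the next term of a number sequence by building a difference
    table until a row becomes constant (handles any polynomial pattern)."""
    rows = [list(seq)]
    while len(rows[-1]) > 1 and len(set(rows[-1])) > 1:
        prev = rows[-1]
        rows.append([b - a for a, b in zip(prev, prev[1:])])
    # Fold the table back up: each row's next entry is its last entry plus
    # the next entry of the row beneath it.
    total = rows[-1][-1]
    for row in reversed(rows[:-1]):
        total += row[-1]
    return total

print(next_term([2, 4, 6, 8]))       # 10
print(next_term([1, 4, 9, 16, 25]))  # 36
```

It gets the "right" answer while containing nothing resembling intelligence, which is rather the point.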
In the end, human intelligence is subjective and comparative: you just know when one person is more intelligent than another. Someone else might think otherwise, but given a large enough sample set you get a human-biased measure of intelligence.
I mention this because:
Among all autistics, 75 percent are expected to score in the mentally retarded range on standard intelligence tests — that's an IQ of 70 or less.
What the fuck are we doing using a test like this anyway? It's a ridiculous measure of intelligence and these sorts of comparisons are pointless. Humanity likes to boil everything down to statistics, and in cases where human ability is not easily measured we should just not use them, but decide on a more suitable means of understanding the mind. It's widely known that people with autism often display incredible feats of recollection or memory; is that retarded? By the IQ test they certainly are, because they can't see a relationship between "bat" and "cat", but they certainly have an unmeasurable gift.
As a scientist, a good maxim is that if the test doesn't yield the results you were looking for, either you are wrong or the test is. In this case, I think the test is wildly wrong. There may be a correlation between people that we believe are intelligent through our human-biased judgement and those that get high marks in the test. There are also people who slip through the net and we don't know why... they appear to be amazingly intelligent but could not get high IQ scores. Surely the test is so generalised that it's not applicable here.
Good luck to her, she's played a blinder.
An IQ test is the same as any other test for the standard population. There are always going to be exceptions where its assumptions simply don't work. Another example is the Body Mass Index, which is your weight (in kilos) divided by the square of your height in metres. For 95% of the population this works fine, and gives decent quantitative results... However for about 5% of the population who sit at the far ends of the bell curve it simply doesn't work.
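For what it's worth, the calculation really is that small; a quick sketch with made-up numbers, just to make the formula concrete:

```python
def bmi(weight_kg, height_m):
    """Body Mass Index: weight in kilograms divided by height in metres squared."""
    return weight_kg / height_m ** 2

# An 80 kg person at 1.76 m comes out around 25.8 - just over the standard
# "overweight" cut-off of 25, which shows how coarse the banding is.
print(round(bmi(80, 1.76), 1))
```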
The only difference between BMI and IQ is that BMI is based on the physical, and is therefore fairly easy to comprehend. IQ is an attempt to measure a non-physical attribute, and so picking out the 5% is a lot harder. Autistics are simply the super fit athletes who come out as morbidly obese when using "standard" testing...
You may as well hate Sudoku, crosswords (stupid fucking things), and Nintendo's rather popular Brain Training series as well. Every measuring system has its limitations... it doesn't make them useless for the vast majority.
IQ is considerably worse than BMI for a couple of reasons: it is attempting to measure something for which we have no definition, rather than acting as a composite score for things we can measure (like BMI is). It does this by measuring things that we can measure and that might possibly be related, but in a haphazard fashion that leaves lots of room for bias.
It is a bit like trying to tell what colour an orange is by measuring its diameter, weighing it, seeing what temperature it freezes at and then averaging those figures to give it an orangeness score. It is for the most part totally meaningless.
It also doesn't actually correlate with anything useful (about the only thing it does correlate with is the ability to do well in school exams, and even then the link is tenuous, more to do with structural similarities and cultural factors than any real connection), whereas BMI does. The only reasons it gets used at all are historical ones and the fact that no one can think of what else to do.
I do hate sudoku; it is one of the more pointless time sinks.
Ah, EMW got there first. Don't even begin to think they're useful for anything but serving themselves.
It's a comparative measurement, and in most cases it does a fair job. It's no better or worse than other measurements used to define people (and again I will revert to BMI, which is nothing more than an arbitrary benchmark... in and of itself it means nothing, but it allows comparative measurement against others... the health risks they link to high BMI scores are utterly dependent on the individual ultimately).
A quick check on Wikipedia indicates IQ tests were initially developed to help identify children who need extra help at school, and it probably does that fairly well. By that means it is indeed a score of how well you can do school exams, and given that for most people who may be given an IQ test a formalised schooling is also an inevitability, maybe it has a purpose.
I know of at least one person who got to a fairly old age (35) with a severe learning disability without it being diagnosed. A couple of tests later and he was given the help he needed. In that context how is a test bad?
Your point is valid in that in some situations the IQ test can be helpful. My problem with it is that the area in which it is useful is extremely limited; there is even a debate that rages about race being a factor. Example:
http://www.time.com/time/magazine/article/0,9171,902708,00.html
Given that such a wide variance can be introduced by things as trivial as mood and race, I don't think it's the valid indicator it's portrayed as. At worst it's subjective and biased, at best it's useful for a narrow spectrum of society. BMI is a pile of shite half the time as well, but there are other ways of determining if someone is healthy or fat that are more accurate and as a rule we use those. As previously mentioned there are other tests for measuring intelligence that may be more valid, and if there aren't I'd suggest there's certainly a case for developing one.
You're putting too much importance on IQ, Pete. Please answer my three questions listed in my first post. I've found it's the best basis of any argument. If you think it's useful then you're probably just making the results match what you think intelligence to be. "We're expecting X, we get X! Hurrah, isn't our test great?". As a measure of intelligence, it's rubbish.
People who argue that IQ is really useful tend to be those that score well in it.
P.S. I am really well versed in this.
I'm putting no importance on it at all, however I don't think the test in and of itself is useless.
1. What is intelligence? Don't know, don't care, doesn't really matter (see below)
2. Does doing the test make you more intelligent? Within the parameters of the test, yes
3. If I made a computer program that could do the test better than any human, would that make it more intelligent? Yes, as ultimately the human brain is a computer. We're bound to replicate them one day. I have no belief in such things as souls.
For further clarification of no. 1, "intelligence" is a catch-all term, however there are plenty of examples of where something has not been defined, and yet approximate measurements have been used. Gravity is the first one that comes to mind. Newton, without having the first idea what it was, came up with a measurement scale that was fairly good in most circumstances (of course it falls down in some areas, such as sub-atomic physics). He came up with an imperfect approximation, and yet it was successfully used to do little things like find planets, for hundreds of years, until we understood more, and were able to refine the calculations.
Yes, I do tend to do fairly well in IQ tests, though probably not as well as I used to. Haven't done one since uni (MENSA prelim exam). I wouldn't put any weight on my score, as I know I have weak points mentally (language being a key one among them, as well as spatial awareness). Assuming one day that a fully accurate way of determining intelligence is worked out (as opposed to the current approximation), I'm fairly sure that the results that come out will be comparable to the current suite of IQ tests.
Gravity is a repeatable test. IQ tests aren't. The tests Newton did were in regard to the acceleration of objects falling under the force created by large objects. It was all well defined and repeatable. It's been repeated in physics classrooms for years. The bounds of all of Newton's laws are distinct.
Not so for IQ. IQ is supposed to test for intelligence but it doesn't. The bounds it sets is 'human intelligence', so you do have to care and it does matter, because you need to know what it is you're testing for. The problem with that is that we don't actually know. If you did, then when you get results you can say "I tested for this and this is what I got". If you don't know what you're testing for, the test is shit. IQ doesn't know what it's testing for. It tests using a set of games that people think are a good judge of intelligence. It's based on human bias.
By getting better at the test, you're actually getting better at the game, which is not making you more intelligent - it's just making you better at the game. Is that improving your memory, recollection, ability to abstract, logical reasoning and all the other facets or is it just making you better at playing the game?
It's been shown that chimps can do better at some visualisation tasks than humans; they concentrate better and have better spatial awareness. Are they more intelligent than humans? No, of course not. But why? Because they don't have language or numeric skills. What about abstraction, how is that tested? Classically missing from the test because it is something we assign to intelligent beings but can't measure.
To take your last point, if they ever do make a measure for intelligence, they will see that IQ was woefully inadequate as a test for anything but the ability to play the game. That's unlike subatomic physics, which leaves Newton's laws intact and correct for large-scale objects. A different set of laws may apply to the very small, but Newton's laws aren't refined away; they are just set within their correct parameters.
And this does matter when the IQ test is used to classify and cluster people. It's inappropriate and misleading.
The issue here is that you're looking for absolutes, whereas I'm happy to see IQ tests as an approximation. I've never been grouped, and never come across anywhere that people are grouped via an IQ test. I've been grouped in school via test results for specific subjects, but not on general IQ.
Also, any measurement of humans will vary based on a number of parameters, and measurements of any living organism will by their very nature fluctuate, as you are not measuring a static system. You could measure my height in the morning, and probably find it's about 176cm, whereas in the evening it will be 174-175cm (probably shorter on Sunday). My weight fluctuates by about 3-4 kilos throughout the day, so you cannot say with any accuracy how tall, or how heavy I am, as the tests are not repeatable. Does this make those measurements useless? I'm quite keen on improving my VO2Max, however this can only be properly measured on a treadmill, with electrodes attached and breathing tubes in place, so I use approximations to determine it. Are those approximations pointless? We can go further on the physical scale... what is fitness? How do you determine if one person is "more fit" than another? Do you therefore detest the concept of BMI tests, bleep tests, body fat %, all competitive sports, as these are comparative tests, and not really "accurate"?
Ultimately I don't see intelligence as any different to other, more physical attributes. Both are trainable, and comparable. You can pick many different ways to compare them...IQ is simply one way.
You can compare mental tasks, like the ability to do maths for example, but that doesn't actually equate to intelligence. And they only have any meaning on a level playing field, which IQ isn't.
I.e. if you knew nothing of, say, algebra and were given a problem to solve, you wouldn't be able to do it. On the other hand someone who had done basic maths would be able to do it. Does that make them more intelligent than you? No, basic maths might be the limit of their ability. Also, if you then learned algebra, are you suddenly more intelligent? No, you just know the rules; it hasn't fundamentally altered your ability to do mathematical problems, merely explained the framework. Without both of the people taking the test having the same training it is meaningless to compare the results.
There are also differences within that, like some people are incredible at mental arithmetic but suck at the more abstract maths. So is someone who can multiply numbers in their head more or less intelligent than someone who can work out complex multidimensional problems on paper, or are they two separate skill sets?
I think we may never get a good definition of intelligence; it is an abstract concept with no grounding in reality. As to measuring brain function, measuring different cognitive tasks and learning to duplicate it in software, I think we'll get there, though not any time soon: much of the brain is still a mystery.
IQ doesn't measure intelligence. It measures your ability to do IQ tests. You can measure physical things, you can't measure intelligence. Thus, wildly different from physical attributes.
So how do you measure fitness? That's a physical thing...
Perhaps that's another thing that's not well measured but I don't know about that. It's IQ that I think is useless.
Sorry, my answer should have been:
Define fitness.
Fitness is identical to IQ... it has many facets, and testing well in one does not correlate with testing well in another. It does not make the tests in themselves pointless. The classic one is BMI for weightlifters and professional athletes (they always score highly/obese, as their bodies are far more dense than the middle of the bell curve). There is, however, a standard set of tests for "fitness" (BMI, ECG, blood pressure, body fat %, military tests etc), that allows most people to determine where they are in the grand scheme of things. The scales are based against average scores from sample sets, again identical to IQ tests (where on average the score is 100 I believe).
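The "average is 100" bit is right, as far as I know: modern IQ scores are deviation scores, rescaled against a norming sample so the mean sits at 100 with a standard deviation of (usually) 15. A rough sketch of that arithmetic, with entirely made-up raw scores:

```python
from statistics import mean, stdev

def deviation_score(raw, norming_sample, centre=100, spread=15):
    """Rescale a raw test score against a norming sample so that the
    sample mean maps to 100 and one standard deviation to 15."""
    z = (raw - mean(norming_sample)) / stdev(norming_sample)
    return centre + spread * z

norming_sample = [38, 41, 45, 45, 47, 50, 52, 53, 55, 58]  # invented raw scores
print(round(deviation_score(50, norming_sample)))  # ~104: a touch above average
```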
To use a visualisation of intelligence, it's the same as the rest of your body, with muscles and joints all over the place. Some bits can be developed and efficient, some underused/injured and weaker. You'd need a test of each and every muscle in that body to get your fully accurate result, however in most cases that's complete overkill, and an approximation based on the more commonly used functions is enough for a decent ballpark figure. Likewise, to determine someone's fitness you don't need to link them up to an ECG, measure their oxygen intake during load and stress, or do a full chemical analysis of their blood levels and perspiration. Blood pressure and a medium-load heart rate reading is all you really need to get a good idea of where they are at. You can refine your results, but in the vast majority of cases you're not achieving much in doing that.
I knew you'd say that, here's my pre-empted reply.
We're approaching an argument on measurability and observability or, worse, science as a religion. In control theory (which applies here), there are things you can observe and measure, such as your height and weight. There are things you can't measure but can observe, and then there are things you can't observe or measure. You're always looking to be able to observe and measure things to tell what any system is doing.
However, because systems often have outputs that are linked to other things you can't observe, you can measure the things you can observe and draw conclusions on those things you can't observe. It's called an observability criterion.
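For anyone who hasn't met it, the usual statement (for a linear time-invariant system, which is obviously a huge simplification compared to anything like a brain) is the Kalman rank condition:

```latex
% For \dot{x} = Ax, \; y = Cx, with state dimension n, the hidden state x
% can be reconstructed from the measured output y if and only if the
% observability matrix has full rank:
\mathcal{O} = \begin{bmatrix} C \\ CA \\ CA^{2} \\ \vdots \\ CA^{n-1} \end{bmatrix},
\qquad \operatorname{rank}(\mathcal{O}) = n .
```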
That's OK for physical aspects. You can set up tests that directly measure certain aspects. How fast you can run 13 miles, for example. You'd set up a test to run 13 miles and time it using something suitable. Of course, to be statistically significant in a system with natural variation (which is different from a non-repeatable one) you have to run the 13 miles a few times and take an average. You know the test is measuring something definite - your time to run 13 miles. It might be slightly different as time goes on or depending on other factors, but the test itself is always the same. 13 miles doesn't change, time doesn't change.
To then work out what's going on with the things you can't observe, you get experts to make the links between observable things and non-observable things.
That's the bit where IQ falls down. For a start, you don't actually know what it is you're trying to measure. You don't have a "time". You have a series of games you've made up to get someone to do. You have no idea if that is actually measuring some facet of intelligence or whether it's just playing a game. As EMW says, being good at the game (or algebra) only proves you know the rules and are good at the game. The MASSIVE LEAP OF FAITH (science as a religion) is believing that the 'experts' can make direct ties between intelligence and the games you play - links between the games and facets of intelligence you can't observe or measure.
The 13 mile run is just another game, and absolutely no determination of fitness. It's a determination of how fast you can run 13 miles... identical to the IQ test. They are both arbitrary tests with arbitrary scoring, based on typical results.
I've already stated several times that I don't think IQ tests are all-encompassing and accurate...I reckon they are a good yardstick for most people. All tests are ultimately games of one type or another, however it doesn't make them pointless.
IQ is not even a remotely a good test.
The 13 mile run is one test, one of many. The matching-shapes part of the IQ test is one of many. I can see why you think they are identical.
BUT!
And this is where you're completely mistaken. It's reasonable to assume that running 13 miles quickly is one test that could be appropriate to measuring fitness...
BUT!
No-one in their right mind would think that the silly games that IQ tests make you do are actually appropriate to measuring intelligence. If you do, then you're going further down the road of "science as a form of faith" than I and EMW would ever go.
I know you famously never back down so if you want to quit it there, then please do. You can even have the last word but it has gone to show how trusting you are in science, particularly in an aspect which is as vague and unreliable as human intelligence.
I don't think a 13 mile run is a good test of fitness at all. There are people far fitter than me who I fully expect to beat on Sunday. Benjie (lad I train with) is younger, stronger, better over short distances, more flexible (did about 4 years of martial arts), and has been training for about 6-7 years. He's just not good at long distance running, and I have focused almost exclusively this year on endurance and speed. He's fitter, I'm more focused. I'll win though...
The same could be said for any physical fitness test you care to mention, however it does not make the tests in and of themselves pointless.
//added. As Rob's thrown in the towel, I'll do the final point now, and highlight that you are treating the concept of a half marathon in exactly the same way as an IQ test. It's a relative indication of fitness/intelligence, while at the same time being utterly inaccurate if you're interested in that area. In both cases you are measuring one thing to do a comparative on another. Neither are useless tests, while at the same time neither are definitive...
Of course, the only real way to measure intelligence is to roll 4D6, re-roll 1's and 2's, then take the highest 3 dice.
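In the same spirit, a sketch of that measurement protocol (re-rolling until the dice stop showing 1s and 2s, as specified):

```python
import random

def measure_intelligence():
    """Roll 4d6, re-rolling any 1s and 2s, then sum the highest 3 dice."""
    dice = []
    for _ in range(4):
        die = random.randint(1, 6)
        while die <= 2:  # house rule: 1s and 2s get re-rolled
            die = random.randint(1, 6)
        dice.append(die)
    return sum(sorted(dice, reverse=True)[:3])

print(measure_intelligence())  # anywhere from 9 to 18, nicely skewed towards genius
```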
As always, no hard feelings.
Of course not, I just like arguing, and being belligerent. I couldn't give a toss about IQ tests one way or t'other :-D
Pete, a 13-mile run IS a good test of fitness.
The reason you can beat someone who is more agile and younger than you is that you have more determination than them; I believe it's defined as emotional or mental fitness.
Running 13 miles is a good test of being able to run 13 miles. A sprinter would really struggle over the same distance... however by no definition are they unfit. Swimmers wouldn't be good at it as they wouldn't have the resilience to the impacts. Likewise I would struggle in any swimming event, as I'm a shockingly poor swimmer (it's all upper body)... does that mean I'm unfit?
It's _a_ test of fitness, a decent yardstick, and performance within the bounds of that test is a reasonable marker of overall fitness compared to others, however it's not all-encompassing, and has several caveats attached... just like an IQ test really.
Fucking hell, skunty, what are you doing? Don't open Pandora's box!
I think what intelligence or fitness actually are needs to be defined more precisely before any test can be designed.
If fitness is defined by being able to run long distances quickly, then a 13 mile run would be an accurate test for fitness.
If fitness however was defined as being able to produce a certain amount of energy for a given period of time, then a measure of calories burned over time and results produced would be more accurate. Pete is right: using the measure of a 13 mile run would not be a good test to determine how fit a swimmer was, and would not allow comparison between two athletes of differing fields.
Both could be defined as fit, both could also be compared, but as yet the definition of fit has not been made to any greater extent than a subjective qualitative statement.
Until that definition is made then a bleep test, a 13 mile run, a 100m sprint or a biathlon are equally good (and poor) measures of how fit a person is.
The same works for intelligence. Is higher intelligence, measured by how quickly a series of mental arithmetic questions are solved, an aspect of IQ tests? Why not? It is valid to say that this would show that someone possibly has quicker mechanisms in place to think through problems. It is also valid to question this as a measure of intelligence. A computer can perform calculations quicker than a person. Is it more intelligent? Most would say not.
Yes there are various people that have put forward what counts as intelligence, but these are multiple and differing.
You are both arguing that the tests are not accurate measures of either mental or physical fitness. I say that the tests are good at testing the aspects chosen to define this fitness. I question that the aspects chosen to define the fitness are the correct ones.
At best I say they are indicators and nothing more. They are qualitative, not quantitative. Until you accurately define what "fitness" is for either aspect I say that this will not change.
The problem with intelligence is that there is nowhere near a clear definition of what it is or how to measure any of it. The tests described do not test any part of the broadness of intelligence; they just test the act of doing the test.
That's where these two things depart.
But there is also no definition of "fitness". Yet many tests exist for a very small part of the all-encompassing field, and within the scope of what they are testing they are valid. IQ tests are no different in that respect.
Consider Pandora's box cheerfully open!
Hey Pandora, how goes it?
So effectively the *discussion* is no longer about intelligence, or fitness, but about the definition of the words "intelligence" and "fitness", which could well go on forever and a day. Let's see if we can find a test for them! ;)
My camp is that you don't need to fully define a subject to be capable of doing comparative tests on aspects of it.
I'm going to move away from the 13-mile run as a test, as it's a shockingly bad and overly specific example by anyone's standards, and instead I'll use something called VO2Max (effectively a determination of how much oxygen you can utilize in a period of time... the higher, the better. It's impacted by high-intensity training) as it's far less specific, but still reasonably well tied to performance levels. Many athletes however (field events, sprinters (who don't use aerobic respiration for their events), weightlifters, in fact anything with the focus on anaerobic/explosive energy expenditure) will have a low score, so it's not a _complete_ measure of fitness.
So, on one hand you have an IQ test, which tests some aspects of (and I may be missing some stuff here, it's a while since I did one) pattern recognition, word association, language, and spatial awareness/shape puzzles. Practising in these areas would increase your IQ. Ingesting a thesaurus would aid immensely, if my recollections are anything to go by (there were some words used I didn't know the meaning of). It doesn't cover abstraction and many other areas, so is not a _complete_ measure of intelligence.
So, we have tests which are clearly not a full measure of their test subject. They are, however, still useful for comparative testing, and for those people who just need the relevant skills in their life. Just as not all aspects of fitness are important to everyone (I, for instance, don't give a shit about strength and explosive activity), I also don't need language skills of an exceptional level, and average spatial awareness is good enough.
So, in summary, I'm saying you don't need the definitions, just the definition of what the test covers...
OK, being able to run 13 miles says you can run and are in pretty good physical condition, so you could use it to suggest that the person in question might be able to do other physical things quite well. The act of being able to run that far gives you a certain amount of information about the individual as a whole, which can be used to draw conclusions about the overall fitness of the person, and other tests in related areas can give you a more complete picture, since we know all the areas are linked to some degree. As you say there may be differences in some areas due to specialisation, but we know that the parts are linked and can infer some global fitness level by looking at performance in a series of areas.
With the intelligence tests we can't say that. We don't know that there is any link between, say, spatial awareness and numerical ability or linguistics or memory or logic. There are some good reasons to think that they are not linked, but we don't know nearly enough about how the brain works to say with any degree of confidence. We know a lot of these tasks are handled by discrete parts of the brain, but we don't know how much the overall mental fitness of, say, your language centres tells you about your logic skills.
This is the key difference, with fitness each individual test can be linked to the others since we know that it is all part of a whole. We don't know that this is the case for the brain.
There is not yet any definitive data on how the disparate tests of different areas of cognitive function that are used to make up an IQ test link together. Without that we are not actually getting comparative tests, since we don't know that one aspect of the test has any bearing on the other. It is like saying a master mechanic has as much cooking skill as a master chef: since he is good at fixing cars, he must be able to make a good cake as well, because both skills involve putting a series of parts together.
No it isn't. It's like saying "This person scores highly on our standardised tests, which measure ability in some aspects of generalised reasoning ability, therefore we predict that, all other things being equal, he will perform above the average in similar reasoning tasks in the future." Which is what an actual psychologist would say. Let's not muddy the waters with an appeal to ridicule; this thread has been marred by enough bad arguments already.
No serious psychologist believes that IQ test results are conclusive in determining someone's "intelligence". In fact, pretty much every psychologist I've known has avoided using the word "intelligence" entirely, since, as has been pointed out, it's pretty much meaningless from a scientific perspective (and far too emotionally loaded a word to be used without prejudice.) What psychologists talk about IQ being a pointer to, is someone's generalised (or just general) reasoning ability. And even in that case, when talking about individuals, the data would almost never be taken in isolation, without context or other data sources, and absolutely never without taking into account the well-documented cultural and social biases in the test. Biases, incidentally, which were documented by psychologists who continued to consider the test useful.
And IQ tests, used carefully, are useful; test results are highly consistent in any one individual and variable between individuals, so it's definitely measuring something. It's possible that what's being measured is nothing more than someone's ability to do IQ tests, but that hypothesis is called into question by the fact that test results correlate strongly with academic success, professional success and financial success. This suggests that whatever is being measured is fairly strongly related to something which is important to people's ability to achieve their goals in our environment. We don't know why that correlation exists, but Occam's razor (talking of things that have to be used carefully) suggests that it exists because the reasoning ability used to score well in IQ tests is the same as the reasoning ability we use to achieve our day-to-day goals. Even if that's not the case, the correlation remains, and makes the test useful, even if only as a proxy measure.
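To be concrete about what "correlates strongly" means here (the numbers below are entirely invented, not real study data), the statistic usually quoted is a correlation coefficient such as Pearson's r over paired scores:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two paired samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

iq_scores  = [85, 92, 100, 105, 110, 118, 125]   # invented
exam_marks = [48, 55, 60, 58, 70, 74, 81]        # invented
print(round(pearson_r(iq_scores, exam_marks), 2))  # ~0.97 for this toy data
```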
It's true that we really have very little idea of how the brain works, especially higher level functions, such as problem solving and reasoning, but that doesn't mean we can't measure test results and make statistical use of those results where they're shown to be useful and reliable. It's also almost certainly true that human cognitive ability is vastly too complex to be measured on a single axis, but it doesn't necessarily follow that measurements of a single axis therefore have no value, and, in fact, the IQ test is designed to attempt to cover a few different skills, in an attempt to improve its utility. As Matt points out, this can be considered a weakness, since it could cause confusion, or mask variability within the test if the results were considered solely as a whole, without data about specific section results. This is often the case with those "test your own IQ!" books, or magazine articles, but much less common in actual studies.
What it comes down to is more or less what Pete's been saying from the off, which is that no, the test isn't perfect, but it doesn't have to be; it's not a binary decision; we don't have to say either that it's 100% correct and predictive or that it's useless and without value. The fact is that IQ tests do have value when used carefully and with due consideration. They don't give the whole picture, but that's OK, no-one expects them to; they're one of many tools in the psychologists' toolbox, which between them give us some insight into human cognition and cognitive ability.
(And yes, the toolbox of Psychology as a whole is woefully incomplete, but if we cast aside the tools we do have because of that, we'll never make any progress at completing it at all.)
Quoting aggroboy: "Let's not muddy the waters with an appeal to ridicule; this thread has been marred by enough bad arguments already."
Not blaming anyone, Pete.
Aggro,
"The fact is that IQ tests do have value when used carefully and with due consideration. "
I think the real issue is that the reasonable consideration I've quoted is not the general opinion; a straw poll of the public would say that an IQ test is a test of intelligence, not a test of logical thinking that excludes any measure of abstract thought.
But as far as I can tell no-one on this thread is arguing that it is a good general test for intelligence, but that they are useful for psychological work (when used carefully) or as a very rough yardstick.
Assuming this is the case, who is arguing with who now? I don't dispute that the general public may well think of this as a good test of intelligence, but no-one here seems to be convinced of that...
Indeed, if your point is just that we need to better educate the public about the use of such tests, then I have no disagreement at all.
I don't think this counts as an argument anymore! More of a sparring session where the combatants trade verbal blows and Pete runs about shifting the goalposts. ;)
The key part of that quote is "all other things being equal", which is the problem: they are not.
This case clearly shows that if you use IQ tests you can only use them to compare people with the same brain types, and the results are not comparable with other groups and will have different meanings depending on the group. If you used it to compare this woman, or even say a dyslexic or someone else with a sufficiently different brain structure, the results would be either skewed or totally inaccurate. On a standard IQ test this woman would score as severely mentally handicapped with little or no reasoning ability, but it is clear from observation that that isn't the case.
This is the extreme example but even for the less extreme cases it would mean you could not rely on an IQ test to give anything approaching directly comparable results.
We seem to be having several arguments at once, the argument about the usefulness of IQ tests, the argument about whether IQ is conceptually similar to Fitness, and the argument about whether IQ test measure intelligence.
"What psychologists talk about IQ being a pointer to, is someone's generalised (or just general) reasoning ability."
NO NO NO NO! Then they are wrong. To match coloured shapes, you need a perception of shapes, not just reasoning. Match similar words, you need language, not just reasoning. To match numbers in a row, you need numeracy, not just reasoning. They don't test reasoning, we don't know what the brain is doing. You can't extract a variable of 'reasoning' from the system by getting the person to sit lots of tests as the variables are blatantly interdependent! We just assume that it is reasoning. You say that utility is improved if it covers more than one area. You can only improve utility if you actually know the effect of the test. You don't. All you know is the test itself.
How do you know that the test is actually testing what it claims to?
YOU DON'T.
YOU DON'T.
YOU DON'T.
YOU DON'T.
You can make up statistical correlations with people doing well in society and that might be nice for the 95th percentile, but for the outliers, like those in the original article, where IQ is quoted so quickly as a measure, it means nothing. The test is so incredibly biased that it will never be a true test of anything but the test itself. If you want to assume that it's testing higher functions, then fine. That's science as a faith and is INSANE. You use the phrase "Suggests" rather a lot. How many hierarchical assumptions are you going to make before the whole tree falls over?
My biggest problem is that they're not used carefully or as a yardstick. It's now entered the psyche that they are ok and that it's tried and tested because it's been around for a long time. They are shit and should be stopped. Data is taken in isolation. ALL THE TIME.
All the studies I've read have been vague. You do have to start somewhere in science but unfortunately, this particular tool in the "psychologists' toolbox" is so flawed that going near it is dangerous. I have this point to make about a lot of psychology. Having read a fair number of papers, it appears that assumptions are generated from what seems reasonable, a bit like trying to tell what's going on in a PC by what lights flash on the front of the box.
[Keepers++]
All other things are never equal, whatever you're measuring. A major part of the scientific endeavour is the attempt to reduce the impact of uncontrolled variables on tests. If you want to throw IQ tests away for running that risk, then you need to ask some serious questions about the rest of science.
This case tells us very little about IQ tests, beyond what I've already said; that the data can't be taken in isolation. It's true that autistics score disproportionately low on IQ tests, and that, for them, it's not representative. This isn't really news. I know the article makes it out to be earth shattering, but I was taught it at university in the mid-nineties, so somebody has obviously known for longer (not about IQ specifically, but that it was a scale of structural differences with a range of cognitive effects, rather than a specific mental defect to be treated.) The point is that this doesn't invalidate the IQ test; it doesn't mean the test is somehow unfair; the reason autistics score low is that IQ tests happen to measure things that they really are bad at. It's the same for dyslexics. As long as you're aware of that context, the test remains useful (and to a lesser degree, even if you're not.)
In fact the very fact that certain conditions skew the results of tests like IQ can be useful in detecting people with those conditions. In the case of autism, detection isn't usually a problem, but having a baseline method of rating generic reasoning allows us to ask interesting and potentially fruitful questions like "why is this test which is representative of most people, so woefully unrepresentative of people with this condition?"
It's true that autistics have traditionally had a very bad time of it, but I'd suggest that it's their inability to express themselves and their thoughts to others that had led to them being branded retards, rather than their low IQ scores.
This is the extreme example but even for the less extreme cases it would mean you could not rely on an IQ test to give anything approaching directly comparable results.
Not necessarily. The very definition of extreme is 'beyond normal bounds.' We expect extreme cases to deviate from normal models, that's why we call them extreme, but there is no reason at all to assume that that invalidates those models in the normal case. The question is where that line is drawn. As I've said before, statistically speaking, the IQ test is quite useful in the normal case, when we approach it with the care it requires.
ok - that was a response to Matt. On to Rob...
To match coloured shapes, you need a perception of shapes, not just reasoning. Match similar words, you need language, not just reasoning. To match numbers in a row, you need numeracy, not just reasoning. They don't test reasoning, we don't know what the brain is doing.
What if your definition of reasoning includes those things? You don't need to know precisely what the brain is doing in order to measure its ability to solve certain categories of problem, which is all the IQ test claims to do.
You can make up statistical correlations with people doing well in society and that might be nice for the 95th percentile, but for the outliers, like those in the original article, where IQ is quoted so quickly as a measure, it means nothing.
No-one's 'making up' statistical correlations; they're right there in the data. It's true, that there will always be outliers, for whom the correlation doesn't apply, and that we need to guard against making unfounded assumptions, and I never disputed that.
That's science as a faith and is INSANE. You use the phrase "Suggests" rather a lot. How many hierarchical assumptions are you going to make before the whole tree falls over?
Woah - careful with the ad hominems there, mate: you'll have someone's eye out.
I use the word "suggests" because, in science, that's all evidence can ever do.
... and a whole load of assertions of bias, (even dangerous bias,) that I think were covered in my first post (ie, who do you think detected that bias, and if they detected it, don't you think they might be able to correct for it?)
"A major part of the scientific endeavour is the attempt to reduce the impact of uncontrolled variables on tests"
Indeed, but the problem here is we don't even know what a lot of the variables are. There is no definition and no theory, so we have to rely on statistical data, which is variable in its conclusions, most of which have nothing to do with intellect and more to do with sociological correlations. The suggestion is that this puts some pretty big error bars on IQ results and a fairly high degree of variability even just when testing a single individual repeatedly. Some of this is that the more tests you do the better you get at them, but there are other factors involved that make it tough to filter out all of the inequalities and cope with the multiple layers of bias.
It is worth remembering that IQ started as a test to determine if children were retarded in the early 1900s, a time when they knew even less than we do now about how the brain worked. The original tests included things like the tester dropping a ruler and the testee trying to catch it, and other similar, more reaction-time and physically orientated tests. All the current numbers are calibrated in general to fit with the results those original tests produced (since each subsequent test was designed to produce roughly the same numbers as the one before it, and so on back in time).
"You don't need to know precisely what the brain is doing in order to measure it's ability to solve certain categories of problem, which is all the IQ test claims to do."
I agree that you don't need to know what the brain is doing, but by doing problems you're not solving "categories", you're solving JUST THOSE PROBLEMS. Furthermore, the IQ test claims to do much more - it's an intelligence quotient. If you mean the IQ test in the hands of psychology professors, then fair play. You still don't know what areas of cognitive intelligence the test actually excites. You have no idea. The professors don't. You don't. I don't. We don't. There are 'n' variables involved here, where n is a very, very large number indeed. To properly reduce the effects of the variables, you'd need to invent an almost infinite number of tests, where the effect of each test is statistically insignificant. Even then, you still don't know what part of intelligence it is you're testing for.
Do you really think that these simple pattern matching tests are a good measure of any part of a human's intelligent cognition?
No-one's 'making up' statistical correlations; they're right there in the data.
Right. There. In. The. Data. A correlation is made up. You take two sets of data and you draw conclusions. You choose the data. The data isn't generated automatically. I'm not going to let you have that. In general, I do agree that people who appear to be intelligent do tend to be better at IQ tests, but that's not what intelligence is about, and it makes the argument so vague as to be pointless. As useless as the test.
detected that bias, and if they detected it, don't you think they might be able to correct for it?
No. Because you need to know what the bias is before you can detect it, and there isn't the funding in the world to create a test that is free of bias. Furthermore, how do you correct for "cultural upbringing" in a test and still test for memory retrieval? You can't. You can try and create a test without words or free of culture, but if someone lives in a place where red is not a popular colour, your coloured shapes test will put them at a disadvantage. What about removing the bias of doing tests? Actually sitting and performing them? How do you remove those biases? You can't.
Which brings me neatly back to machine learning, which operates without bias and often produces results that are unpleasing to humanity so are modified by researchers pressed into keeping their paper count up for the next RAE.
OK - I keep hearing the same sorts of things coming back at me, which makes me think that either I'm not getting my point across, or that I'm missing everyone else's.
So let me be explicit.
I am emphatically not saying any of the following things:
I explicitly concede the following points:
What I am saying is this:
Those two facts alone make the tests useful, in my mind, for the following two sorts of things (and it must always be in the hands of professional psychologists, who are well aware of the caveats above):
Helping to broadly determine the distribution of a sub-population within a larger population. Particularly, measuring how that distribution can change over time in reaction to changing variables. It's not always a great measure, but, for example, in western schools, where it is a good predictor of academic success, it can be useful in determining or predicting the effects of new teaching policy, either instead of waiting for students to fail their exams, or alongside exam pass-rates as a check-measure. Likewise, a western workplace might use IQ as a proxy measurement for actual workplace performance, to help determine how changes to the workplace affect workers' ability to focus and apply cognitive skills.
and
Helping us frame questions and gather (albeit likely flawed) data which could potentially advance our knowledge of human cognition; after all, it's always easier to ask what's wrong with our current thinking than it is to start from scratch. At some point that knowledge of human cognition will likely allow us to formulate a more comprehensive test (or tests) of it, but we're not there yet.
An example I gave before was "why does IQ correlate well for most Caucasians, but not autistics?" The fact is that the answer to that question will probably be another nail in the coffin of the IQ test as a measure of general cognition. But if we get that answer out of it and it tells us something about what it is to be autistic, then it's been useful, despite being incorrect.
I think Matt and I were both driving towards this last point from opposite sides of the debate, while discussing uncontrolled variables. Matt's point was (I think) that there are a tonne of uncontrolled and even unknown variables when it comes to measures of cognitive performance, so we can't trust what they tell us. My point (which on rereading, I totally failed to make) is that, by analysing the data from a test (or set of tests) we know to be incomplete, we stand a chance of identifying those variables and taking account of them in the next iteration of our test. Or, of course, in the test that supplants our current one.
No. Because you need to know what the bias is before you can detect it, and there isn't the funding in the world to create a test that is free of bias ... How do you remove those biases? You can't.
Spot on; I miswrote what I meant to say, which was take account of those biases. Obviously, some known biases can be corrected for statistically; others cannot. In the places where no correction is possible, the only option is to not use the test in question. That's often, but by no means always, the case with IQ tests. Unknown biases remain just that: unknown. Hopefully rigorous experimental design and appropriate use of controls will detect them. It would be a bit weird to throw away an entire test on an unsubstantiated assumption that they're there.
I now agree with you. Hurrah for bullet points! Damn you, Aggro.
Come on Pete, say something objectionable.
Poppycock!
I'm not actually disagreeing, I just like the word; despite its American origins it sounds so British :)
Anyway, I think Will is probably right that we are arguing the same point from two different directions, and neither of us was quite making the point that clearly.
As long as we come to the conclusion that I am right and Pete isn't wrong, we're good to go.