
Talk:Intelligence quotient

From Wikipedia, the free encyclopedia

This is an old revision of this page, as edited by WeijiBaikeBianji (talk | contribs) at 03:56, 27 May 2016 (The scholarly consensus is that genetics is more important than environment in determining intellectual standing: r do more reading, to learn more). The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.

I will be adding numerous references and bibliography entries.

Last year I began a major revision of a working paper project (begun in 2006, based on shorter research notes I began compiling as early as 1993) largely on this Wikipedia topic. As the talk page templates note, "This is a controversial topic that may be under dispute." As a courtesy to the editors who have long been here, I will note that I will begin adding the dozens of books and articles I have at hand for my non-Wikipedia project (a literature review for popular audiences interested in the primary source literature on IQ testing) to this Wikipedia article. At first I will add books and articles from various points of view to the bibliography. Then I will add more references to verify the statements that have already long stood in the article. (I hope to add specific page numbers to both the references I add and the existing references that I am able to look up here.) At some length, I expect to expand sections with additional facts, perhaps add a few subsections, and from time to time do substantive edits under the NPOV principle, as the sources report various points of view. Thanks to all of you who have already worked on this very detailed article. I am lucky to have access to a very comprehensive academic library at which I have circulating privileges, so I am delighted to add some V and NPOV to various Wikipedia projects. WeijiBaikeBianji (talk) 03:42, 1 June 2010 (UTC)[reply]

That sounds like a great job. I'm looking forward to reading your additions. Good luck to you! :) Lova Falk talk 08:19, 1 June 2010 (UTC)[reply]
Here is an update on that project. You may find it helpful while reading or editing articles to look at a bibliography of Intelligence Citations, posted for the use of all Wikipedians who have occasion to edit articles on human intelligence and related issues. I happen to have circulating access to a huge academic research library at a university with an active research program in these issues (and to another library that is one of the ten largest public library systems in the United States) and have been researching these issues since 1989. You are welcome to use these citations for your own research. You can help other Wikipedians by suggesting new sources through comments on that page. It will be extremely helpful for articles on human intelligence to edit them according to the Wikipedia standards for reliable sources for medicine-related articles, as it is important to get these issues as well verified as possible. -- WeijiBaikeBianji (talk) 17:22, 30 June 2010 (UTC)[reply]
I have begun substantive edits to this article based on sources that other Wikipedians can check in the Intelligence Citations list. All of you are encouraged to suggest new sources for that list, which will be useful for editing quite a few articles on Wikipedia. -- WeijiBaikeBianji (talk) 15:48, 28 July 2010 (UTC)[reply]

Standard deviation calculation

The text indicates that the test is so designed that the standard deviation is fifteen points. While this may have been true at one point, it appears no longer to be true.

First, American Mensa (and, one hopes, they know at least a little bit about IQs) has chosen an IQ of 142 as the cutoff, which, per their intent, limits their membership to the top two percent of the population. Read another way, their assertion is clearly that persons possessing an IQ >= 142 represent 2% of the population. Citing the Q function table in an appendix to J. Melsa and D. Cohn, Decision and Estimation Theory (New York: McGraw-Hill, 1978), the closest figure to 2% (viz., 0.0202) corresponds to an argument of 2.05. In other words, Mensa asserts that 142 represents an IQ 2.05 standard deviations above the mean, i.e., the standard deviation is 42/2.05, or approximately 20.5.
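As a numeric check, the standard-deviation argument above can be reproduced with Python's standard library (the 142 cutoff and the top-2% figure are taken from the comment itself, not from Mensa's current criteria):

```python
from statistics import NormalDist

# z-score at the 98th percentile, i.e. the cutoff for the top 2%
z = NormalDist().inv_cdf(0.98)  # about 2.05, matching the Q-function table

# If an IQ of 142 is meant to sit at that cutoff above a mean of 100,
# the implied standard deviation is:
implied_sd = (142 - 100) / z    # about 20.45

print(round(z, 2), round(implied_sd, 1))
```

This is just the comment's arithmetic made explicit; it says nothing about which standard deviation any particular test actually uses.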

Second, citing the standards maintained by a similar organization, known as the Six Sigma Society, we run into a rarefaction problem. Specifically, the value of Q(6) is less than 1.82×10⁻⁹ (scil., most charts run only to Q(5.9), whose value is 1.82×10⁻⁹, so Q(6) is necessarily slightly smaller). Given that the earth's population is approximately seven billion, this would allow for only about seven members—planet-wide—of the Six Sigma Society. Does the assertion that only a handful of persons on the planet possess an IQ of 190 or greater even remotely make sense? If, citing the preceding argument, we adopt the figure of 20.5 for the standard deviation—in fact, let us use 20 for simplicity's sake—then we conclude that only a handful of persons on the planet possess an IQ of 220 or higher, which is far more reasonable, if considering no more "reliable" a source than a decades-old Guinness Book of World Records, which cited Kim Ung-yong of Seoul, who possessed an IQ of 210, as "the smartest person in the world." 73.49.1.133 (talk) 21:56, 5 June 2015 (UTC)[reply]
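The rarity figures in this argument can be checked the same way (the seven-billion world population is the comment's own round number):

```python
from statistics import NormalDist

population = 7e9  # approximate world population assumed in the comment

# Q(6): the probability of scoring at least six standard deviations
# above the mean; computed as the lower tail at -6 by symmetry.
q6 = NormalDist().cdf(-6.0)  # roughly 9.9e-10, indeed below 1.82e-9

# Expected number of people worldwide beyond the six-sigma cutoff:
expected_people = q6 * population  # a single-digit number of people

print(q6, expected_people)
```

With an SD of 15, an IQ of 190 is six sigma (hence the single-digit head count); with an SD of 20, the same rarity corresponds to an IQ of 220, which is the comparison the comment is making.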

I'll check the article statements, which have long been based on reliable sources, if I remember correctly. -- WeijiBaikeBianji (talk, how I edit) 04:10, 6 June 2015 (UTC)[reply]
The standard deviation in IQ points is something that test makers can freely choose. Many tests use 15 points these days. The American Mensa accepts many different tests. The qualifying score is usually 130 or 132 on individually administered tests like the WAIS and the SB[1], that is, 2 SDs above the mean.--Victor Chmara (talk) 20:26, 6 June 2015 (UTC)[reply]

Thanks! When I joined Mensa many, many years ago, the flat statement on their Web site was that a minimum score of 142 was required; it said elsewhere that this corresponded to the cutoff at the 98th percentile, scil., the point below which 98 percent of the population would score. It's interesting to see that they now cite 130 or 132 in many different contexts—as well as that, on a scale called Cattell, the minimum is 148! By the same calculation, that corresponds to a standard deviation of 48/2.05, or about 23.4, which presumably reflects the Cattell scale's nominal standard deviation of 24.

I guess it's quite a separate question why so many people in Mensa were so very disturbed. Not only that, but you occasionally met someone who was obviously extremely intelligent but who—despite having two earned Ph.D.s—was employed as a stockroom clerk. In one extreme case, a supremely gifted gentleman was able within one evening to write a short story in which he won a SCRABBLE game against the Devil, 1075–1074 (with all details explicitly specified), but was literally unable to respond to "Good evening" or "How 'bout those 'Skins [Washington Redskins, familiarly]?" at a party. These folks must have occurred as genetic accidents in families that had no idea how to deal with them, leaving them somewhat like the (supposedly humorous) character Brick on The Middle. Admittedly, Brick doesn't have much of an intellect at all—instead having mastery of some mellifluous turns of phrase because of his perpetual nose-in-book state—but he does have a need for rather intensive psychiatry!

73.49.1.133 (talk) 20:21, 10 June 2015 (UTC)[reply]

Genetics and environment section - heritability

There is apparently a large amount of variation across the several wiki pages concerning human intelligence. I'm in the process of trying to bring them all in line with the data and with each other.

Right now, I'm only focusing on the Genetics and environment section of this page, and referring specifically to the issue of the heritability of intelligence. I wrote a bit on this subject in the Heritability of IQ talk page, and it's relevant here, so I will use some parts from what I wrote there.

First of all, there are several misinterpretations of data which misrepresent the overall scientific consensus and the reality of the issue as evidenced by a large number of studies, all of which have data that is broadly in agreement. Many of the misinterpretations appear to be a purposeful attempt to insert a biased view of the data into the page. For example, the first obvious error is the sentence "Heritability measures the proportion of 'variation' in a trait that can be attributed to genes, and not the proportion of a trait caused by genes." The first and second halves of that sentence directly contradict each other. Heritability does measure the proportion of a trait caused by genes, but heritability applies to populations. And in populations there are variations in phenotypic traits between individuals... heritability measures how much of that variation is caused by genetics or, in other words, the proportion of a trait caused by genes.
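The population-level reading of heritability (a ratio of variances) can be illustrated with a toy simulation; the 60/40 variance split below is an arbitrary illustration, not a claim about IQ:

```python
import random
from statistics import variance

random.seed(42)

# Toy model: each person's phenotype = genetic component + independent
# environmental component. True heritability here is, by construction,
# Var(G) / (Var(G) + Var(E)) = 0.6 / (0.6 + 0.4) = 0.6.
n = 100_000
genetic = [random.gauss(0.0, 0.6 ** 0.5) for _ in range(n)]
environment = [random.gauss(0.0, 0.4 ** 0.5) for _ in range(n)]
phenotype = [g + e for g, e in zip(genetic, environment)]

# Heritability: the share of phenotypic *variance* attributable to genes
h2 = variance(genetic) / variance(phenotype)
print(round(h2, 2))  # close to 0.6
```

Note that h² is a property of the variation in this simulated population, not of any single individual's phenotype.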

The second obvious error is the most damaging one. It's the sentence: "The general figure for heritability of IQ is about 0.5 across multiple studies in varying populations". This sentence conflicts with the overwhelming majority of studies and data on the subject. The .5 number is considered an absolute lower bound for the heritability of IQ (see: http://www.ncbi.nlm.nih.gov/pubmed/21826061 - note that "fluid intelligence" is what is tested during IQ tests). Since the citation is a book (Behavioral Genetics - https://books.google.com/books?id=OytMMAEACAAJ) which can't be viewed anywhere online, I have no way to check its sources to see whether this is a misquotation and the book simply mentioned .50 as a lower bound. However, after researching the data in the Behavioral Genetics book that the citation referred to, I found that it comes from studies done on children. The heritability of IQ changes dramatically as a person ages from childhood to adulthood, so any studies on the heritability of IQ that were done solely on children should be labeled as such and NOT included in figures that could be taken as the final adult heritability number, since they will be significantly lower.

Several meta-studies and authoritative sources conflict with the .5 number for adult intelligence heritability as well. For example, "Mainstream Science on Intelligence", a statement signed by 52 university professors specializing in intelligence and related fields, has this to say on the heritability of IQ: "Heritability estimates range from 0.4 to 0.8 ... indicating genetics plays a bigger role than environment in creating IQ differences".

But far better than that is "Intelligence: Knowns and Unknowns". In 1995, the APA set out to publish what was meant to be a non-political "authoritative report" on the issue of intelligence. Basically, it was to be a statement on the consensus of the field of professional psychology on the issue of intelligence, straight from its leading members. Here's what it says on the heritability of IQ (which may explain why .5 is mistakenly used sometimes):

"Across the ordinary range of environments in modern Western societies, a sizable part of the variation in intelligence test scores is associated with genetic differences among individuals. Quantitative estimates vary from one study to another, because many are based on small or selective samples. If one simply combines all available correlations in a single analysis, the heritability (h²) works out to about .50 and the between-family variance (c²) to about .25 (e.g., Chipuer, Rovine, & Plomin, 1990; Loehlin, 1989). These overall figures are misleading, however, because most of the relevant studies have been done with children. We now know that the heritability of IQ changes with age: h² goes up and c² goes down from infancy to adulthood (McCartney, Harris, & Bernieri, 1990; McGue, Bouchard, Iacono, & Lykken, 1993). In childhood h² and c² for IQ are of the order of .45 and .35; by late adolescence h² is around .75 and c² is quite low (zero in some studies). Substantial environmental variance remains, but it primarily reflects within-family rather than between-family differences.

These adult parameter estimates are based on a number of independent studies. The correlation between MZ twins reared apart, which directly estimates h², ranged from .68 to .78 in five studies involving adult samples from Europe and the United States (McGue et al., 1993). The correlation between unrelated children reared together in adoptive families, which directly estimates c², was approximately zero for adolescents in two adoption studies (Loehlin, Horn, & Willerman, 1989; Scarr & Weinberg, 1978) and .19 in a third (the Minnesota transracial adoption study: Scarr, Weinberg, & Waldman, 1993)."
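A minimal sketch of the arithmetic behind these direct estimates, using only the correlations quoted in the report:

```python
from statistics import mean

# Correlations between MZ twins reared apart directly estimate h².
# The report quotes a range of .68 to .78 across five adult studies.
mza_low, mza_high = 0.68, 0.78
h2_midpoint = (mza_low + mza_high) / 2  # 0.73, near the "around .75" figure

# Correlations between unrelated children reared together directly
# estimate c²; the report quotes roughly 0, 0, and .19 in three studies.
c2_estimate = mean([0.0, 0.0, 0.19])    # about 0.06, i.e. "quite low"

print(round(h2_midpoint, 2), round(c2_estimate, 2))
```

This only restates the report's numbers; the actual studies weight samples properly rather than taking simple midpoints and means.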

Another problematic sentence is "Debate is ongoing[weasel words] about whether these heritability estimates are too high, owing to inadequate consideration of various factors—such as the environment being relatively more important in families with low socioeconomic status, or the effect of the maternal (fetal) environment."

In fact, debate is ongoing in every aspect of every subject in the world. That is what we call science. Nevertheless, that sentence needs to be removed since it implies, inaccurately, that 1) there is some coherent trend toward the belief that current heritability measurements are too high; and that 2) there are proposed hypotheses for why they're "too high".

The entire Individual genes section is poorly written and reflects an obvious bias. It appears to imply that genes have little effect on intelligence. But intelligence, like height, is a polygenic trait. It's influenced by a large number of genes, so individual genes invariably will have a very small effect on their own. This by no means suggests anything about the amount of influence genes as a whole have on intelligence. Bzzzing (talk) 19:29, 7 December 2015 (UTC)[reply]

Neuroimaging - Issues with the newly added section

The Hampshire and Owen paper "Fractionating Human Intelligence" that is the main subject of this new 'Neuroimaging' section is not thought of highly in the field. I will explain why and will provide sources.

The paper's main claim is that g or general intelligence is not a valid concept. First of all, there are a massive number of papers and studies going back almost 100 years in support of the concept of g or general intelligence. It's been studied in every possible way and with every available technology. It is, without question, the single most researched topic in psychology. And the weight of that research confirms the claim that g is an objectively real variable and the best existing measure of human cognitive abilities. (https://www.udel.edu/educ/gottfredson/reprints/1997mainstream.pdf) (http://differentialclub.wdfiles.com/local--files/definitions-structure-and-measurement/Intelligence-Knowns-and-unknowns.pdf)
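The statistical core of the g claim is the "positive manifold": when every test correlates positively with every other, one dominant common factor emerges. A toy eigendecomposition shows this; the four tests and the uniform 0.5 inter-test correlation are arbitrary illustrations, not real data:

```python
import numpy as np

# Toy positive manifold: four tests, every pair correlating 0.5.
n_tests, r = 4, 0.5
corr = np.full((n_tests, n_tests), r)
np.fill_diagonal(corr, 1.0)

# Eigendecomposition of the correlation matrix: the largest eigenvalue
# corresponds to the first (g-like) factor.
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Analytically the eigenvalues are 1 + (n-1)*r = 2.5 and 1 - r = 0.5,
# so the first factor alone explains 2.5/4 = 62.5% of total variance.
g_share = eigenvalues[0] / n_tests
print(eigenvalues[0], g_share)
```

Whether that dominant statistical factor reflects a single underlying ability or a blend of abilities is exactly the point the Hampshire et al. debate below is about.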

As for the paper itself... it claims to have a huge sample size, based on results of an online "IQ test" made up of 12 different cognitive tasks. They received 45,000 usable responses, which would be a large sample size - however, the central claims of the study rely exclusively on fMRI tests used to measure brain activity caused by each of the 12 tasks... and there were only 16 volunteers for the fMRI section of the study, a very small sample size indeed.

The paper shows evidence that different areas of the brain are used for different tasks. The authors believe that since human intelligence is formed from multiple cognitive components (different brain areas), a higher-order intelligence factor therefore does not exist. That's the gist of the paper. A more detailed description is that there are essentially two or three major areas of the brain involved in what we call "general intelligence" (memory, logic, and possibly verbal networks), and that many of the 12 tasks required each of these two or three networks in varying degrees. The authors argue that because of that, a single higher-order intelligence factor does not really exist. If you're thinking that's an absurd argument... you're not alone. As I said, this paper has been commented on negatively by a number of people in the field, including Richard J. Haier, Sherif Karama, Roberto Colom, Rex Eugene Jung, Wendy Johnson, Michael C. Ashton, Kibeom Lee, and Beth A. Visser, among others.

Quotes from other scientists about the paper:

  • "There’s a sense, though, in which it doesn’t matter. If all tasks require both memory and reasoning (and all did in this study), then the sum of someone’s memory and reasoning ability is in effect a g score, because it will affect performance in all tasks. If so, it’s academic whether this g score is ‘really’ monolithic or not. Imagine that in order to be good at basketball, you need to be both tall, and agile. In that case you could measure someone’s basketball aptitude, even though it’s not really one single ‘thing’…"(http://blogs.discovermagazine.com/neuroskeptic/2012/12/24/how-intelligent-is-iq/#.VnWcXbYrKXY)
  • But the best response, IMO, was given by Ashton, Lee, and Visser. They dismantle the paper piece by piece in their subsequent "Higher-order g versus blended variable models of mental ability: Comment on Hampshire et al": "Here we use CFA to compare a higher-order g model with a task mixing or blended variable model in relation to the data of Hampshire et al., and we find that the higher-order g model provides a much closer fit to the data. Following Thurstone (1938), we suggest that it is conceptually implausible that every task is influenced by every factor of mental ability. We also suggest that the non-existence of g would be demonstrated by finding mutually orthogonal markers of those factors; however, the data of Hampshire et al. and other mental ability datasets suggest that this cannot be achieved." (http://www.sciencedirect.com/science/article/pii/S0191886913012804)
  • And then again by Ashton, Lee, and Visser in "Orthogonal factors of mental ability? A response to Hampshire et al" where they say: "We explain that Hampshire, Parkin, Highfield, and Owen (2014b) have not demonstrated any orthogonality of brain network capacities and that a model of mental ability structure must make testable predictions about individual differences in task performance. Evidence thus far indicates that g-based models account for intercorrelations among mental ability tasks better than do task mixing or blended variable models."(https://www.researchgate.net/publication/260029415_Orthogonal_factors_of_mental_ability_A_response_to_Hampshire_et_al)

However, the most important point to realize is that it doesn't matter how many brain networks or "neural systems" are involved in general intelligence, and in fact, we wouldn't expect general intelligence to be centered in just one area of the brain. The most widely accepted current models of consciousness, for instance Stanislas Dehaene's "global neuronal workspace", suggest that consciousness is a distributed system, made up of many different brain networks or "neural systems" that come together to form a single concept. Just like general intelligence. Here's a related quote from Dehaene's paper: "Because GNW (global neuronal workspace) neurons are broadly distributed, there is no single brain center where conscious information is gathered and dispatched but rather a brain-scale process of conscious synthesis achieved when multiple processors converge to a coherent metastable state." (http://www.cs.helsinki.fi/u/ahyvarin/teaching/niseminar4/Dehaene_GlobalNeuronalWorkspace.pdf)

Lastly, a minor quibble: the article says: "They postulated that instead of using a single concept of G or intelligent quotient..." The general intelligence factor or 'g' is always printed as a lower case italic.

Because of the above - the fact that the only paper mentioned in the Neuroimaging section of this article is a relatively fringe paper that not only makes a claim contrary to the general consensus in the field, but also has a number of technical and logical flaws that have been pointed out by other researchers in the field - I have deleted that section. I hope I made a good argument for my edit, but if not, please let me know on this talk page. Thanks.

Bzzzing (talk) 19:57, 19 December 2015 (UTC)[reply]

Race versus ethnicity

I have undone the recent edit changing "Race and Intelligence" to "Ethnicity and Intelligence", since it is incorrect for the following reasons: 1) the term "race" is generally associated with biology, while "ethnicity" is associated with culture. "Races" are genetically distinct populations within the same species, while groups of different "ethnicity" may or may not be genetically distinct and may differ only in some cultural aspect, such as language, religion, or customs; and 2) there is already an entire article on Wikipedia called Race and intelligence.

There is a movement among some groups to try to avoid the term "race" when referring to humans, or to downplay it as a "socially constructed" term, and I suspect that is why the edit was made. All terms are socially constructed, but that doesn't mean the term is any less useful or that what it refers to is any less real. The term "race" is very useful in human biology, and conveys very real, objectively measurable information. Yes, humans do exist on a biological continuum, but that continuum is not perfectly smooth and there are "bulges" on it. Those "bulges" are what we call "races".

The edit also added a link to Nations and intelligence, but I have left that for now, although I think it probably should be removed as well, since that subject hasn't had much good research done on it yet. Does anyone have any strong opinions on either leaving it or taking it out? Thanks.

Bzzzing (talk) 16:54, 21 December 2015 (UTC)[reply]


Neuroimaging - Issues with the newly added section RESPONSE

User:Bzzzing

I won't undo the changes you made to the section that I had previously added, since you do have a point that it is outweighed by the current consensus. However, I will respond for the sake of argument regarding the topic of IQ, since I am very much interested in intelligence and cognition.

Despite how long researchers have studied IQ or G, whether for 100 or 1,000 years, it is still a flawed concept. It doesn't take much to notice that, unless of course one is convinced through appeals to authority, as many have been. My main argument is that IQ represents a pure measure of visual-spatial ability and reasoning, nothing more. That is evident from the fact that higher-order IQ tests such as Raven's Progressive Matrices base their test items purely on solving visual-spatial shapes and mental rotation puzzles (and nothing else). So do Cattell's Culture Fair III and even the WAIS. For example, here are the WAIS items and the mental abilities they measure:

WAIS III IQ test

  • Object Assembly - spatial and mechanical items
  • Picture Arrangement - visual-spatial task
  • Picture Completion - visual-spatial task
  • Block Design - visual-spatial task
  • Letter-Number Sequencing - requires visuospatial working memory, therefore not a pure measure of verbal ability (http://www.ncbi.nlm.nih.gov/pubmed/10868248)
  • Arithmetic - requires mental rotation, therefore not a pure measure of verbal ability (http://www.sciencedirect.com/science/article/pii/S0001691813001200)
  • Cancellation - visual selective attention, visual-motor ability
  • Information - questions are based on the degree of general information acquired from culture (general knowledge gained from experience, outside reading and interests) and are therefore not a measure of cognitive ability
  • Vocabulary - questions are also based on past experience and environmental exposure; the picture vocabulary questions obviously require visual recognition. — Preceding unsigned comment added by Doe1994 (talk | contribs) 18:44, 22 December 2015 (UTC)[reply]


So no, "G" is simply visual-spatial processing power, which relies on the fronto-parietal network in the brain, and all IQ tests correlate with each other because they measure this single cognitive ability. I have talked to Richard Lynn, Wendy Williams and other psychologists about this and they have no defence against my arguments. Richard Lynn's argument was that "spatial ability" is important in intelligence, while Wendy Williams gave nothing in response. Their unwillingness to criticize the 100-year-old models of intelligence comes from orthodoxy and a creationist-like loyalty to their field.

The Fractionating intelligence study does make a big point, which is that different cognitive abilities rely on different cortical networks in the brain. For example, there is a separate network for processing verbal, auditory, communicative and language-based information, such as the temporal cortex, versus brain networks that process visual-spatial and numerical information, such as the parietal cortex. IQ tests also measure zero verbal cognitive abilities, and they do a poor job of measuring short-term memory and working memory, as the Fractionating study pointed out.

I look forward to continuing this discussion with you.

Doe1994 (talk) 02:54, 22 December 2015 (UTC)[reply]


IQ correlates strongly with every type of cognitive measurement ever devised. It also correlates strongly with academic success and future life success. The fact of the matter is that IQ measures something far more than the narrow "visual-spatial" aspect you claim... it measures one's ability to learn. If you know of any type of measurement at all which is better than an IQ test at determining one's ability to learn, I am curious to hear about it. Bzzzing (talk) 22:36, 22 December 2015 (UTC)[reply]

Also... I'm not sure you read my entire post explaining my reasons for removing the study. I answered many of the points you brought up in it. I also wrote about how "g factor" is always printed as a lowercase italic, not uppercase as in G. The fact that you keep typing it as "G" leads me to believe you didn't read my whole reasoning above. Bzzzing (talk) 22:42, 22 December 2015 (UTC)[reply]


User:Bzzzing


Thanks for the response. I think you said it yourself: IQ merely correlates with success, and that implies that it's partly measuring something "causal" or related to success. Which goes back to my original argument that IQ tests measure visual-spatial ability, which is a part of this "causal" factor of intelligence and therefore allows IQ tests to predict success without even measuring the entire spectrum of intelligence. Or, in another analogy, it's like measuring muscle strength to predict future performance in sports without actually measuring the entire range of athletic abilities.

I strongly disagree that IQ tests measure one's ability to learn, because they're actually based on the concept of fluid intelligence, not crystallized, and therefore they measure one's natural aptitude to solve given problems and not one's ability to adapt, retain, and sustain information over a prolonged period of time (learning).


The concept of the g factor is also pointless, because if researchers don't know what this mysterious g is, then it's meaningless to draw any conclusions from such an unknown factor. I personally don't believe g exists; human intelligence is merely the integration of cognitive abilities such as reasoning, working memory, etc. in response to processing different kinds of information, such as verbal, spatial, visual or social. IQ tests only measure the visual-spatial processing and working memory part.

Anyway, my own opinion is that researchers in intelligence are not very intelligent themselves, which is why the concept of IQ is not very convincing to the public. And trust me, I have talked to all the pioneers in intelligence, such as Richard Lynn, Nyborg, Roberto Colom, Wendy Johnson and Scott Barry Kaufman.


Doe1994 (talk) 04:27, 26 December 2015 (UTC)[reply]



User:Bzzzing

Regarding the Fractionating intelligence paper


Regarding that study and the concept of g, I don't understand why on Earth researchers would think there is a higher-order intelligence or "general factor". The fact that this was deduced by past researchers from the correlations among paper IQ tests automatically puts the concept in doubt, since correlation is not causation. (Also, all the IQ tests measure the same thing: visual-spatial ability.)


Even in evolutionary terms, it wouldn't make sense for there to be a higher-order cognitive system. Intelligence probably evolved separately as different cognitive abilities in response to processing information in different environments, and therefore different systems for different cognitive abilities would have evolved over time in humans. What would have driven humans to evolve a separate higher system such as a general factor? Psychometricians can't answer this, and neither can they define what the general factor even is. My own proposition is that there are different systems for different cognitive/intelligence abilities: visual-spatial intelligence probably evolved in response to early hominid visual-spatial navigation and hunting, while verbal intelligence evolved in response to in-group communication, conversation and social dynamics. There is no requirement for this concept of g.

Doe1994 (talk) 06:43, 27 December 2015 (UTC)[reply]

The theory of multiple intelligences is referenced in the section just preceding where the neuroimaging material was put. "Neuroimaging" was a bad name for the section; I think perhaps the theory of multiple intelligences should have a more informative section name. However, as to all this business about personally knowing people, them not answering, and having your own thoughts on the matter - that is all irrelevant to putting something in the article. The article needs to be based on citations with due weight. The article about multiple intelligences is not very supportive of it; if you have citations which show something else, please do add them. However, when I read the first citation that was added to the 'neuroimaging' section, it did not really support what was said here. Yes, it said different intelligences seemed to be supported by different parts of the brain, but it also talked about general intelligence as recruiting the various parts to work together. Dmcq (talk) 12:37, 27 December 2015 (UTC)[reply]

User:Dmcq

The Fractionating intelligence paper had nothing to do with the theory of multiple intelligences; it had to do with the testing construct of IQ tests such as Raven's or the WAIS. Current psychologists think that short-term memory, reasoning and verbal ability can all be measured in one test, but the Fractionating intelligence paper pointed out that this assertion is superficial, because each of those abilities resides in a separate network in the brain and therefore requires a separate test in order to be accurately measured. Or, in other words, the current IQ tests do not measure the full capacity and efficiency of those three networks.


Shootingstar88 (talk) 00:24, 28 December 2015 (UTC)[reply]

'Current psychologists think that short term memory, reasoning and verbal ability can all be measured in one test'? What gives you that impression? Or that even any of those can't be broken down more? Or that it makes much difference as far as this article is concerned? Dmcq (talk) 00:35, 28 December 2015 (UTC)[reply]


User:Dmcq

Because IQ test questions are not constructed in such a way as to measure the capacity for short-term memory, reasoning and verbal ability. They are constructed in a way that measures the ability to solve novel problems, regardless of how much short-term memory, reasoning or verbal ability that takes. Therefore they do not actually measure the capacity of one's core utilities of intelligence, which the Fractionating paper defines as the three cognitive abilities I listed above. It's like measuring a person's ability to perform a novel physical task in order to generalize about his athletic ability, without actually measuring the full capacity of his stamina, endurance, speed and flexibility. Do you understand?

I would go even further and suggest that IQ tests do not measure verbal ability, period. There is no indication that the WAIS measures any verbal fluid ability, while it is already established that Raven's only measures visual-spatial ability. The current literature also categorizes Arithmetic as "verbal" even though it requires spatial visualization and mental rotation, both of which are spatial abilities. This is in line with the common popular assumption that IQ tests are flawed and superficial.

User:shootingstar88 —Preceding undated comment added 19:43, 28 December 2015 (UTC)[reply]

You seem to be reading a lot more into the paper than is there. It is an interesting paper and the technique seems useful. However, you seem to think that because they have produced evidence that the three factors they extracted account for a large fraction of IQ test scores, IQ tests should therefore be changed to specifically measure those factors. That simply does not follow. More importantly, it is not what the authors said. We really have to wait until some author says they are criticizing the IQ test before we write that they are. I think, though, it would be okay to write down what they said as a view of IQ as being composed of a number of factors. They did have something to say about the g factor, in that they saw little evidence of a single g factor. Dmcq (talk) 14:35, 29 December 2015 (UTC)[reply]

How can we do with all of them

abstract IQ make new notes — Preceding unsigned comment added by 80.157.80.122 (talk) 12:29, 21 April 2016 (UTC)[reply]

Hampshire et al. study

There's 100+ years of research on the construct of IQ, and this research is described in numerous textbooks, review articles, meta-analyses, etc. In light of that, can someone explain to me why there is a need to cover the study by Hampshire et al. [2] in this article? Can you at least cite some scholarly secondary sources that describe and contextualize the study, instead of that Independent article that mostly consists of pompous and absurd statements from the study's authors? Why do you think that a study of 13 individuals (that's their sample size in the brain scan part) should be discussed in Wikipedia?

There's also the fact that the Hampshire et al. study has a peculiar history, as described by Haier et al. The editors of the journal Neuron, where the study was published, lacked any expertise in psychometrics or intelligence research, so they commissioned an outside expert, the psychometrician Richard Haier, to write a commentary on the study before it was published. However, Haier, together with some colleagues, concluded that the study was highly flawed and said that it shouldn't be published without major revision. Neuron's editors rejected this advice, published the study essentially unchanged, and refused to publish Haier's highly critical commentary.

Later, Hampshire et al. had back-and-forths about the study with psychometricians in the journals Personality and Individual Differences and Intelligence. These psychometricians, experts in the very topic of the structure of intelligence, rejected the argument of the study on numerous grounds.

It's quite clear that discussing this study gives it undue weight as it is not covered, and will likely not be covered, by any major reviews or textbooks discussing cognitive ability. There's a long history of researchers challenging the g theory, from Thomson and Thurstone to Horn and van der Maas. This history can be discussed in this article if needed and there are many reliable sources documenting it, but there's no reason to give inconsequential self-promoters like Hampshire any space here.--Victor Chmara (talk) 08:46, 24 May 2016 (UTC)[reply]


  • Disagree: Why should we ignore actual Science in favor of Psychology? "Psychometrics" isn't a science. Its practitioners are mostly psychologists, not neuroscientists. The article was published in Neuron, a neuroscience journal. Secondly, just because something has been considered valid for "100+" years doesn't mean it can't be wrong. You're also ignoring that the sample size of the total study was 46,000+, selected out of 100,000+. If you're going to ignore the questionnaire sample set completely, then you might as well ignore all of psychometrics, because that's basically what it's based on in the first place. The actual neuroimaging sample set was an additional add-on to verify the findings. cӨde1+6TP 11:23, 24 May 2016 (UTC)[reply]
Let's get real. Psychometrics is a mature science that produces highly replicable results with large effect sizes. Neuroscience, in contrast, is an immature field that, as any honest neuroscientist will admit, struggles with reproducibility and lack of basic statistical understanding among its practitioners[3]. Neuroscientific measures are far away from challenging behavioral measures in the prediction and understanding of behavior.
Neuroscience methods can be profitably combined with psychometrics but that requires understanding of both fields, something that Hampshire et al. lack. Haier, for example, has published a number of studies that use brain imaging methods, but he would not, in this day and age, publish a study with N=13, and certainly would not make far-ranging claims based on such meager evidence.
The brain imaging part of Hampshire et al. is the only part of the study that has any hope of providing new evidence. The fact that you think that the brain data was only "an additional add on to verify the findings" means that you don't understand the study at all and shouldn't be commenting on this.
As to their behavioral data, they are of the type that is a dime a dozen in differential and educational psychology, although tests with such poor psychometric properties as those of Hampshire et al. are unusual. Note that when Ashton et al.[4] compared the fit of a standard higher-order g-factor model to that of their parameterization of the Hampshire model using a correlation matrix provided by Hampshire, the fit of the g model was clearly superior. Therefore, Hampshire et al.'s own behavioral data provide strong evidence against their model.--Victor Chmara (talk) 12:29, 24 May 2016 (UTC)[reply]
  • I think the Hampshire et al. study is a reasonable inclusion - the back-and-forth between them and the psychometricians can be summarized as well. It should of course not be included the way it was originally included, and it probably doesn't merit more than a couple of sentences. But psychometricians do not have a monopoly on studying intelligence (educational research, cognitive psychology, neuroscience, information sciences, AI, philosophy), and frankly it seems absurd to me to think that neuroscience and cognitive science have nothing to contribute to our understanding of intelligence. I will refrain from giving my own opinions about psychometrics, and hence not respond to Victor Chmara's descriptions of the field. ·maunus · snunɐɯ· 14:01, 24 May 2016 (UTC)[reply]
The only things notable about the study are the way Neuron bungled the peer review process and the absurd media campaign Hampshire et al. waged. There are hundreds of studies on the neuroscience of IQ differences, most of them better than Hampshire's, with sample sizes larger than 13, but there's no reason to discuss any individual study in this article. As this article is about IQ and not intelligence in general, psychometric research is inevitably at the foreground, but views from other disciplines can of course also be incorporated, provided that they meet normal Wikipedia requirements.
The reason the Hampshire study caused debate in differential psych journals was that the strong claims made in it were inconsistent with the weak evidence presented. Nothing very interesting emerged from the debate, and the topics discussed -- factor rotation, ergodicity, selection effects, etc. -- are not a good fit for a general article like this.--Victor Chmara (talk) 14:33, 24 May 2016 (UTC)[reply]
One helpful suggestion I received when I mentioned this article as an entry for the latest Core Contest is that this article is way too long by Wikipedia article length guidelines. We should be using hypertext and summary style more here to actually shorten the article, not dump into it paragraph after paragraph of text about unreplicated primary research studies or fringe views on IQ testing (pro or con). Victor is correct that there is plenty of reliable secondary scholarly literature on this article's topic (always new textbooks and handbooks coming out, which I find in libraries and mention here on the article talk page from time to time). We should use resources like those to improve the article, rather than cherry-picking primary research publications mentioned in the latest press release. That's simply upholding the Wikipedia guideline on reliable sources. -- WeijiBaikeBianji (Watch my talk, How I edit) 17:54, 24 May 2016 (UTC)[reply]

The scholarly consensus is that genetics is more important than environment in determining intellectual standing

This article states "Environmental and genetic factors play a role in determining IQ. Their relative importance has been the subject of much research and debate." There is some debate about which is more important, and environment certainly plays a role, but the consensus among scholars is overwhelming that genetics is more important than environment in determining an individual person's intellect if that person is in a normal environment. (An exception would be if a person was starved, tortured or sleep deprived; in those cases environment would probably be more important, because those conditions do real damage to intelligence, and that is something mainstream scholars accept as an exception. Another exception some mainstream scholars accept is a person who is part of a discriminated-against group, such as African Americans being given inferior education. But mainstream scholars generally think that a person who is not suffering from one of those issues is going to have their intelligence determined more by genetics than environment.) I believe the consensus among scholars leans more in Hans Eysenck's direction than Stephen Jay Gould's with regard to individual differences in intelligence. So I think the article should be changed to not give undue weight to the environmentalist view, which is a minority view at this point. I'm going to review Eysenck's writing and some other writing I've looked at to back up my point that the consensus is that genes are more important than environment for individual differences in normal environments. RandomScholar30 (talk) 02:40, 27 May 2016 (UTC)[reply]

Eysenck stated in Intelligence: A New Look: "It has been known for many years that heredity contributes more than environment to differences in IQ, but recent years have brought forth a veritable flood of evidence to support and strengthen this early finding." (Eysenck, H. J., Intelligence: A New Look, Transaction Publishers, p. 9.) So Eysenck was stating that the scholarly consensus held his view that intelligence is primarily genetic. I will look for more evidence, though, and provide it in favor of changing what that sentence says. RandomScholar30 (talk) 03:04, 27 May 2016 (UTC)[reply]
Eysenck also quoted in his autobiography a statement that said "On the whole scholars with any expertise in the area of intelligence and intelligence testing (defined very broadly) share a common view of the most important components of intelligence and are convinced that it can be measured with some degree of accuracy. An overwhelming majority also believe that individual genetic inheritance contributes to variations in IQ within the white community...". That is from page 290 of Eysenck's Rebel with a Cause, Transaction Publishers, 1990 [5]. The context of who made the statement is not on the page quoted from; I own a copy of the book but don't want to dig it out right now, so I will do that later. The point is that this supports my claim that there is already a scholarly consensus that genetics is more important than environment for intelligence. RandomScholar30 (talk) 03:11, 27 May 2016 (UTC)[reply]
This meta-analysis from 2014 says, "Taken together, these findings appear to be most consistent with transactional models of cognitive development that emphasize gene–environment correlation and interaction." It does not appear that there is overwhelming consensus, so I would say the current wording is an accurate reflection of the current status of the debate. And actually this meta-analysis highlights the different views quite well, so I'm going to add it as a source for that sentence you quoted. PermStrump(talk) 03:51, 27 May 2016 (UTC)[reply]
I've read Eysenck's book. (I've used at least one of Eysenck's books as a source for a related Wikipedia article.) But you owe it to yourself to read newer books, because this is an ongoing field of research, and Eysenck has been dead for a while. -- WeijiBaikeBianji (Watch my talk, How I edit) 03:56, 27 May 2016 (UTC)[reply]