intelligence, human

Introduction

      mental quality that consists of the abilities to learn from experience, adapt to new situations, understand and handle abstract concepts, and use knowledge to manipulate one's environment.

 Much of the excitement among investigators in the field of intelligence derives from their attempts to determine exactly what intelligence is. Different investigators have emphasized different aspects of intelligence in their definitions. For example, in a 1921 symposium the American psychologists Lewis M. Terman and Edward L. Thorndike differed over the definition of intelligence, Terman stressing the ability to think abstractly and Thorndike emphasizing learning and the ability to give good responses to questions. More recently, however, psychologists have generally agreed that adaptation to the environment is the key to understanding both what intelligence is and what it does. Such adaptation may occur in a variety of settings: a student in school learns the material he needs to know in order to do well in a course; a physician treating a patient with unfamiliar symptoms learns about the underlying disease; or an artist reworks a painting to convey a more coherent impression. For the most part, adaptation involves making a change in oneself in order to cope more effectively with the environment, but it can also mean changing the environment or finding an entirely new one.

      Effective adaptation draws upon a number of cognitive processes, such as perception, learning, memory, reasoning, and problem solving. The main emphasis in a definition of intelligence, then, is that it is not a cognitive or mental process per se but rather a selective combination of these processes that is purposively directed toward effective adaptation. Thus, the physician who learns about a new disease adapts by perceiving material on the disease in medical literature, learning what the material contains, remembering the crucial aspects that are needed to treat the patient, and then utilizing reason to solve the problem of applying the information to the needs of the patient. Intelligence, in total, has come to be regarded not as a single ability but as an effective drawing together of many abilities. This has not always been obvious to investigators of the subject, however; indeed, much of the history of the field revolves around arguments regarding the nature of intelligence and the abilities that constitute it.

Theories of intelligence
      Theories of intelligence, as is the case with most scientific theories, have evolved through a succession of models. Four of the most influential paradigms have been psychological measurement, also known as psychometrics; cognitive psychology, which concerns itself with the processes by which the mind functions; cognitivism and contextualism, a combined approach that studies the interaction between the environment and mental processes; and biological science, which considers the neural bases of intelligence. What follows is a discussion of developments within these four areas.

Psychometric theories
      Psychometric theories have generally sought to understand the structure of intelligence: What form does it take, and what are its parts, if any? Such theories have generally been based on and established by data obtained from tests of mental abilities, including analogies (e.g., lawyer is to client as doctor is to __), classifications (e.g., Which word does not belong with the others? robin, sparrow, chicken, blue jay), and series completions (e.g., What number comes next in the following series? 3, 6, 10, 15, 21, __).
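
      To make the series-completion item concrete: the next number is 28, because the differences between successive terms (3, 4, 5, 6) grow by one at each step. The short Python sketch below, written for this article as an illustration (it is not part of any published test), finds the next term by that same differencing logic.

    # Illustrative sketch: extend a series whose successive differences grow by 1.
    def next_term(series):
        diffs = [b - a for a, b in zip(series, series[1:])]  # 3, 4, 5, 6
        step = diffs[-1] - diffs[-2]                         # the differences grow by 1
        return series[-1] + diffs[-1] + step

    print(next_term([3, 6, 10, 15, 21]))  # prints 28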

      Psychometric theories are based on a model that portrays intelligence as a composite of abilities measured by mental tests. This model can be quantified. For example, performance on a number-series test might represent a weighted composite of number, reasoning, and memory abilities for a complex series. Mathematical models allow for weakness in one area to be offset by strong ability in another area of test performance. In this way, superior ability in reasoning can compensate for a deficiency in number ability.
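
      As a minimal numerical sketch of such a model (the weights and scores below are invented for illustration and are not drawn from any actual test), a composite score can be formed as a weighted sum of separate ability scores, so that strength in one ability offsets weakness in another:

    # Hypothetical weighted-composite model for a number-series score.
    weights = {"number": 0.3, "reasoning": 0.5, "memory": 0.2}  # assumed weights

    def composite(scores):
        return sum(weights[ability] * scores[ability] for ability in weights)

    # Strong reasoning partly compensates for weak number ability.
    print(composite({"number": 40, "reasoning": 90, "memory": 70}))  # 71.0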

      One of the earliest of the psychometric theories came from the British psychologist Charles E. Spearman (1863–1945), who published his first major article on intelligence in 1904. He noticed what may seem obvious now—that people who did well on one mental-ability test tended to do well on others, while people who performed poorly on one of them also tended to perform poorly on others. To identify the underlying sources of these performance differences, Spearman devised factor analysis, a statistical technique that examines patterns of individual differences in test scores. He concluded that just two kinds of factors underlie all individual differences in test scores. The first and more important factor, which he labeled the “general factor,” or g, pervades performance on all tasks requiring intelligence. In other words, regardless of the task, if it requires intelligence, it requires g. The second factor is specifically related to each particular test. For example, when someone takes a test of arithmetical reasoning, his performance on the test requires a general factor that is common to all tests (g) and a specific factor that is related to whatever mental operations are required for mathematical reasoning as distinct from other kinds of thinking. But what, exactly, is g? After all, giving something a name is not the same as understanding what it is. Spearman did not know exactly what the general factor was, but he proposed in 1927 that it might be something like “mental energy.”
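
      The flavour of Spearman's observation can be conveyed with a small numerical sketch. The correlation matrix below is invented, and extracting a single principal component is used here only as a rough stand-in for a formal factor analysis, not as Spearman's own procedure:

    import numpy as np

    # Invented correlations among four mental tests: all positive, as Spearman observed.
    R = np.array([[1.0, 0.6, 0.5, 0.4],
                  [0.6, 1.0, 0.5, 0.4],
                  [0.5, 0.5, 1.0, 0.3],
                  [0.4, 0.4, 0.3, 1.0]])

    eigenvalues, eigenvectors = np.linalg.eigh(R)                # eigenvalues in ascending order
    g_loadings = eigenvectors[:, -1] * np.sqrt(eigenvalues[-1])  # loadings on the largest component
    print(np.round(np.abs(g_loadings), 2))  # every test loads substantially on one common factor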

      The American psychologist L.L. Thurstone disagreed with Spearman's theory, arguing instead that there were seven factors, which he identified as the “primary mental abilities.” These seven abilities, according to Thurstone, were verbal comprehension (as involved in the knowledge of vocabulary and in reading), verbal fluency (as involved in writing and in producing words), number (as involved in solving fairly simple numerical computation and arithmetical reasoning problems), spatial visualization (as involved in visualizing and manipulating objects, such as fitting a set of suitcases into an automobile trunk), inductive reasoning (as involved in completing a number series or in predicting the future on the basis of past experience), memory (as involved in recalling people's names or faces), and perceptual speed (as involved in rapid proofreading to discover typographical errors in a text).

      Although the debate between Spearman and Thurstone has remained unresolved, other psychologists—such as Canadian Philip E. Vernon and American Raymond B. Cattell—have suggested that both were right in some respects. Vernon and Cattell viewed intellectual abilities as hierarchical, with g, or general ability, located at the top of the hierarchy. But below g are levels of gradually narrowing abilities, ending with the specific abilities identified by Spearman. Cattell, for example, suggested in Abilities: Their Structure, Growth, and Action (1971) that general ability can be subdivided into two further kinds, “fluid” and “crystallized.” Fluid abilities are the reasoning and problem-solving abilities measured by tests such as analogies, classifications, and series completions. Crystallized abilities, which are thought to derive from fluid abilities, include vocabulary, general information, and knowledge about specific fields. The American psychologist John L. Horn suggested that crystallized abilities more or less increase over a person's life span, whereas fluid abilities increase in earlier years and decrease in later ones.

      Most psychologists agreed that Spearman's subdivision of abilities was too narrow, but not all agreed that the subdivision should be hierarchical. The American psychologist Joy Paul Guilford proposed a structure-of-intellect theory, which in its earlier versions postulated 120 abilities. In The Nature of Human Intelligence (1967), Guilford argued that abilities can be divided into five kinds of operation, four kinds of content, and six kinds of product. These facets can be variously combined to form 120 separate abilities. An example of such an ability would be cognition (operation) of semantic (content) relations (product), which would be involved in recognizing the relation between lawyer and client in the analogy problem above (lawyer is to client as doctor is to __). Guilford later increased the number of abilities proposed by his theory to 150.

      Eventually it became apparent that there were serious problems with the basic approach to psychometric theory. A movement that had started by postulating one important ability had come, in one of its major manifestations, to recognize 150. Moreover, the psychometricians (as practitioners of factor analysis were called) lacked a scientific means of resolving their differences. Any method that could support so many theories seemed somewhat suspect. Most important, however, the psychometric theories failed to say anything substantive about the processes underlying intelligence. It is one thing to discuss “general ability” or “fluid ability” but quite another to describe just what is happening in people's minds when they are exercising the ability in question. The solution to these problems, as proposed by cognitive psychologists, was to study directly the mental processes underlying intelligence and, perhaps, to relate them to the facets of intelligence posited by psychometricians.

      The American psychologist John B. Carroll, in Human Cognitive Abilities (1993), proposed a “three-stratum” psychometric model of intelligence that expanded upon existing theories of intelligence. Many psychologists regard Carroll's model as definitive, because it is based upon reanalyses of hundreds of data sets. In the first stratum, Carroll identified narrow abilities (roughly 50 in number) that included the seven primary abilities identified by Thurstone. According to Carroll, the middle stratum encompassed broad abilities (approximately 10) such as learning, retrieval ability, speediness, visual perception, fluid intelligence, and the production of ideas. The third stratum consisted solely of the general factor, g, as identified by Spearman. It might seem self-evident that the factor at the top would be the general factor, but it is not, since there is no guarantee that there is any general factor at all.

      Both traditional and modern psychometric theories face certain problems. First, it has not been proved that a truly general ability encompassing all mental abilities actually exists. In The General Factor of Intelligence: How General Is It? (2002), edited by the psychologists Robert Sternberg (author of this article) and Elena Grigorenko, contributors provided competing views of the g factor, with many suggesting that specialized abilities are more important than a general ability, especially because they more readily explain individual variations in intellectual functioning. Second, psychometric theories cannot precisely characterize all that goes on in the mind. Third, it is not clear whether the tests on which psychometric theories are based are equally appropriate in all cultures. In fact, there is an assumption that successful performance on a test of intelligence or cognitive ability will depend on one's familiarity with the cultural framework of those who wrote the test. In her 1997 paper You Can't Take It with You: Why Ability Assessments Don't Cross Cultures, the American psychologist Patricia M. Greenfield concluded that a single test may measure different abilities in different cultures. Her findings emphasized the importance of taking issues of cultural generality into account when creating ability tests.

Cognitive theories
      During the era dominated by psychometric theories, the study of intelligence was influenced most by those investigating individual differences in people's test scores. In an address to the American Psychological Association in 1957, the American researcher Lee Cronbach, a leader in the testing field, decried the lack of common ground between psychologists who studied individual differences and those who studied commonalities in human behaviour. Cronbach's plea to unite the “two disciplines of scientific psychology” led, in part, to the development of cognitive theories of intelligence and of the underlying processes posited by these theories. (See also pedagogy: cognitive theories.)

      Fair assessments of performance require an understanding of the processes underlying intelligence; otherwise, there is a risk of arriving at conclusions that are misleading, if not simply wrong, when evaluating overall test scores or other assessments of performance. Suppose, for example, that a student performs poorly on the verbal analogies questions in a psychometric test. One possible conclusion is that the student does not reason well. An equally plausible interpretation, however, is that the student does not understand the words or is unable to read them in the first place. A student who fails to solve the analogy “audacious is to pusillanimous as mitigate is to __” might be an excellent reasoner but have only a modest vocabulary, or vice versa. By using cognitive analysis, the test interpreter is able to determine the degree to which the poor score stems from low reasoning ability and the degree to which it results from not understanding the words.

      Underlying most cognitive approaches to intelligence is the assumption that intelligence comprises mental representations (such as propositions or images) of information and processes that can operate on such representations. A more-intelligent person is assumed to represent information more clearly and to operate faster on these representations. Researchers have sought to measure the speed of various types of thinking. Through mathematical modeling, they divide the overall time required to perform a task into the constituent times needed to execute each mental process. Usually, they assume that these processes are executed serially (one after another) and, hence, that the processing times are additive. But some investigators allow for parallel processing, in which more than one process is executed at the same time. Regardless of the type of model used, the fundamental unit of analysis is the same—that of a mental process acting upon a mental representation.
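
      Under the serial, additive assumption just described, the total response time decomposes into a sum of component times, which might be written (in notation introduced here purely for exposition) as

    RT_{total} = \sum_{i=1}^{n} t_i

      where each t_i is the time attributed to one elementary process. Under a parallel model the total is instead governed largely by the slowest of the overlapping processes, roughly max_i t_i.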

      A number of cognitive theories of intelligence have been developed. Among them is that of the American psychologists Earl B. Hunt, Nancy Frost, and Clifford E. Lunneborg, who in 1973 showed one way in which psychometrics and cognitive modeling could be combined. Instead of starting with conventional psychometric tests, they began with tasks that experimental psychologists were using in their laboratories to study the basic phenomena of cognition, such as perception, learning, and memory. They showed that individual differences in these tasks, which had never before been taken seriously, were in fact related (although rather weakly) to patterns of individual differences in psychometric intelligence test scores. Their results suggested that the basic cognitive processes are the building blocks of intelligence.

      The following example illustrates the kind of task Hunt and his colleagues studied in their research: the subject is shown a pair of letters, such as “A A,” “A a,” or “A b.” The subject's task is to respond as quickly as possible to one of two questions: “Are the two letters the same physically?” or “Are the two letters the same only in name?” In the first pair the letters are the same physically, and in the second pair the letters are the same only in name.

      The psychologists hypothesized that a critical ability underlying intelligence is the rapid retrieval of lexical information, such as letter names, from memory. Hence, they were interested in the time needed to react to the question about letter names. By subtracting the reaction time to the question about physical match from the reaction time to the question about name match, they were able to isolate and set aside the time required for sheer speed of reading letters and pushing buttons on a computer. They found that the score differences seemed to predict psychometric test scores, especially those on tests of verbal ability such as reading comprehension. Hunt, Frost, and Lunneborg concluded that verbally facile people are those who are able to absorb and then retrieve from memory large amounts of verbal information in short amounts of time. The emphasis on speed of processing was the significant development in this research.
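
      A minimal sketch of this subtraction logic follows; the reaction times are invented for illustration, whereas Hunt and his colleagues of course worked with real laboratory data:

    # Difference score: name-identity RT minus physical-identity RT, per subject (milliseconds).
    name_rt     = [620, 710, 580, 690]  # "same name?" trials (assumed data)
    physical_rt = [480, 520, 470, 500]  # "same physically?" trials (assumed data)

    lexical_access = [n - p for n, p in zip(name_rt, physical_rt)]
    print(lexical_access)  # smaller differences suggest faster retrieval of letter names from memory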

      A few years later, Sternberg suggested an alternative approach that could resolve the weak relation between cognitive tasks and psychometric test scores. He argued that Hunt and his colleagues had tested for tasks that were limited to low-level cognitive processes. Although such processes may be involved in intelligence, Sternberg claimed that they were peripheral rather than central. He recommended that psychologists study the tasks found on intelligence tests and then identify the mental processes and strategies people use to perform those tasks.

      Sternberg began his study with the analogies cited earlier: “lawyer is to client as doctor is to __.” He determined that the solution to such analogies requires a set of component cognitive processes that he identified as follows: encoding of the analogy terms (e.g., retrieving from memory attributes of the terms lawyer, client, and so on); inferring the relation between the first two terms of the analogy (e.g., figuring out that a lawyer provides professional services to a client); mapping this relation to the second half of the analogy (e.g., figuring out that both a lawyer and a doctor provide professional services); applying this relation to generate a completion (e.g., realizing that the person to whom a doctor provides professional services is a patient); and then responding. By applying mathematical modeling techniques to reaction-time data, Sternberg isolated the components of information processing. He determined whether each experimental subject did, indeed, use these processes, how the processes were combined, how long each process took, and how susceptible each process was to error. Sternberg later showed that the same cognitive processes are involved in a wide variety of intellectual tasks. He subsequently concluded that these and other related processes underlie scores on intelligence tests.
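
      The general idea of isolating component times can be sketched as a linear model in which each item's response time is regressed on counts of the hypothesized components. The component names echo those above, but the counts and times are placeholders, not Sternberg's actual data:

    import numpy as np

    # Rows are analogy items; columns count encoding, inference, mapping, and application steps.
    X = np.array([[4, 1, 1, 1],
                  [4, 2, 1, 1],
                  [4, 2, 2, 1],
                  [4, 2, 2, 2]], dtype=float)
    rt = np.array([2.1, 2.5, 2.8, 3.2])  # observed response times in seconds (assumed)

    # Least-squares estimates of how long each component takes.
    component_times, *_ = np.linalg.lstsq(X, rt, rcond=None)
    print(np.round(component_times, 3))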

      A different approach was taken in the work of the British psychologist Ian Deary, among others. He argued that inspection time is a particularly useful means of measuring intelligence. It is thought that individual differences in intelligence may derive in part from differences in the rate of intake and processing of simple stimulus information. In the inspection-time task, a person looks at two vertical lines of unequal length and is asked to identify which of the two is longer. Inspection time is the duration of stimulus presentation each individual needs in order to discriminate which of the two lines is the longer. Some research suggests that more-intelligent individuals are able to discriminate the lengths of the lines in shorter inspection times.

      Other cognitive psychologists have studied human intelligence by constructing computer models of human cognition. Two leaders in this field were the American computer scientists Allen Newell and Herbert A. Simon. In the late 1950s and early '60s, they worked with computer expert Cliff Shaw to construct a computer model of human problem solving. Called the General Problem Solver, it could find solutions to a wide range of fairly structured problems, such as logical proofs and mathematical word problems. This research, based on a heuristic procedure called “means-ends analysis,” led Newell and Simon to propose a general theory of problem solving in 1972. (See also thought: Types of thinking.)

      Most of the problems studied by Newell and Simon were fairly well structured, in that it was possible to identify a discrete set of steps that would lead from the beginning to the end of a problem. Other investigators have been concerned with other kinds of problems, such as how a text is comprehended or how people are reminded of things they already know when reading a text. The psychologists Marcel Just and Patricia Carpenter, for example, showed that complicated intelligence-test items, such as figural matrix problems involving reasoning with geometric shapes, could be solved by a sophisticated computer program at a level of accuracy comparable to that of human test takers. It is in this way that a computer reflects a kind of “intelligence” similar to that of humans. One critical difference, however, is that programmers structure the problems for the computer, and they also write the code that enables the computer to solve the problems. Humans “encode” their own information and do not have personal programmers managing the process for them. To the extent that there is a “programmer,” it is in fact the person's own brain.

      All of the cognitive theories described so far rely on what psychologists call the “serial processing of information,” meaning that in these examples, cognitive processes are executed in series, one after another. Yet the assumption that people process chunks of information one at a time may be incorrect. Many psychologists have suggested instead that cognitive processing is primarily parallel. It has proved difficult, however, to distinguish between serial and parallel models of information processing (just as it had been difficult earlier to distinguish between different factor models of human intelligence). Advanced techniques of mathematical and computer modeling were later applied to this problem. Possible solutions have included “parallel distributed processing” models of the mind, as proposed by the psychologists David E. Rumelhart and Jay L. McClelland. These models postulated that many types of information processing occur within the brain at once, rather than just one at a time.

      Computer modeling has yet to resolve some major problems in understanding the nature of intelligence, however. For example, the American psychologist Michael E. Cole and other psychologists have argued that cognitive-processing approaches do not accommodate the possibility that descriptions of intelligence may differ from one culture to another and across cultural subgroups. Moreover, common experience has shown that conventional tests, even though they may predict academic performance, cannot reliably predict the way in which intelligence will be applied (i.e., through performance in jobs or other life situations beyond school). In recognition of the difference between real-life and academic performance, then, psychologists have come to study cognition not in isolation but in the context of the environment in which it operates.

Cognitive-contextual theories
      Cognitive-contextual theories deal with the way that cognitive processes operate in various settings. Two of the major theories of this type are that of the American psychologist Howard Gardner and that of Sternberg. In 1983 Gardner challenged the assumption of a single intelligence by proposing a theory of “multiple intelligences.” Earlier theorists had gone so far as to contend that intelligence comprises multiple abilities. But Gardner went one step farther, arguing that intelligences are multiple and include, at a minimum, linguistic, logical-mathematical, spatial, musical, bodily-kinesthetic, interpersonal, and intrapersonal intelligence.

      Some of the intelligences proposed by Gardner resembled the abilities proposed by psychometric theorists, but others did not. For example, the idea of a musical intelligence was relatively new, as was the idea of a bodily-kinesthetic intelligence, which encompassed the particular abilities of athletes and dancers. Gardner derived his set of intelligences chiefly from studies of cognitive processing, brain damage, exceptional individuals, and cognition across cultures. He also speculated on the possibility of an existential intelligence (a concern with “ultimate” issues, such as the meaning of life), although he was unable to isolate an area of the brain that was dedicated to the consideration of such questions. Gardner's research on multiple intelligences led him to claim that most concepts of intelligence had been ethnocentric and culturally biased but that his was universal, because it was based upon biological and cross-cultural data as well as upon data derived from the cognitive performance of a wide array of people.

      An alternative approach that took similar account of cognition and cultural context was Sternberg's “triarchic” theory, which he proposed in Beyond IQ: A Triarchic Theory of Human Intelligence (1985). Both Gardner and Sternberg believed that conventional notions of intelligence were too narrow; Sternberg, however, questioned how far psychologists should go beyond traditional concepts, suggesting that musical and bodily-kinesthetic abilities are talents rather than intelligences because they are fairly specific and are not prerequisites for adaptation in most cultures.

      Sternberg posited three (“triarchic”) integrated and interdependent aspects of intelligence, which are concerned, respectively, with a person's internal world, the external world, and experience. The first aspect comprises the cognitive processes and representations that form the core of all thought. The second aspect consists of the application of these processes and representations to the external world. The triarchic theory holds that more-intelligent persons are not just those who can execute many cognitive processes quickly or well; rather, their greater intelligence is reflected in knowing their strengths and weaknesses and capitalizing upon their strengths while compensating for their weaknesses. More-intelligent persons, then, find a niche in which they can operate most efficiently. The third aspect of intelligence consists of the integration of the internal and external worlds through experience. This includes the ability to apply previously learned information to new or wholly unrelated situations.

      Some psychologists believe that intelligence is reflected in an ability to cope with relatively novel situations. This explains why experience can be so important. For example, intelligence might be measured by placing people in an unfamiliar culture and assessing their ability to cope with the new situation. According to Sternberg, another facet of experience that is important in evaluating intelligence is the automatization of cognitive processing, which occurs when a relatively novel task becomes familiar. The more a person automatizes the tasks of daily life, the more mental resources he will have for coping with novelty.

      Other intelligences were proposed in the late 20th century. In 1990 the psychologists John Mayer and Peter Salovey defined the term emotional intelligence as

the ability to perceive emotions, to access and generate emotions so as to assist thought, to understand emotions and emotional knowledge, and to reflectively regulate emotions so as to promote emotional and intellectual growth.

      The four aspects identified by Mayer and Salovey involve (a) recognizing one's own emotions as well as the emotions of others, (b) applying emotion appropriately to facilitate reasoning, (c) understanding complex emotions and their influence on succeeding emotional states, and (d) having the ability to manage one's emotions as well as those of others. The concept of emotional intelligence was popularized by the psychologist and journalist Daniel Goleman in books published from the 1990s. Several tests developed to measure emotional intelligence have shown modest correlations between emotional intelligence and conventional intelligence.

Biological theories
      The theories discussed above seek to understand intelligence in terms of hypothetical mental constructs, whether they are factors, cognitive processes, or cognitive processes in interaction with context. Biological theories represent a radically different approach that dispenses with mental constructs altogether. Advocates of such theories, usually called reductionists, believe that a true understanding of intelligence is possible only by identifying its biological basis. Some would argue that there is no alternative to reductionism if, in fact, the goal is to explain rather than merely to describe behaviour. But the case is not an open-and-shut one, especially if intelligence is viewed as something more than the mere processing of information. As Howard Gardner pointedly asked in the article What We Do & Don't Know About Learning (2004):

Can human learning and thinking be adequately reduced to the operations of neurons, on the one hand, or to chips of silicon, on the other? Or is something crucial missing, something that calls for an explanation at the level of the human organism?

      Analogies that compare the human brain to a computer suggest that biological approaches to intelligence should be viewed as complementary to, rather than as replacing, other approaches. For example, when a person learns a new German vocabulary word, he becomes aware of a pairing, say, between the German term die Farbe and the English word colour, but a trace is also laid down in the brain that can be accessed when the information is needed. Although relatively little is known about the biological bases of intelligence, progress has been made on three different fronts, all involving studies of brain operation.

Hemispheric studies
      One biological approach has centred upon types of intellectual performance as they relate to the regions of the brain from which they originate. In her research on the functions of the brain's two hemispheres, the psychologist Jerre Levy and others found that the left hemisphere is superior in analytical tasks, such as are involved in the use of language, while the right hemisphere is superior in many forms of visual and spatial tasks. Overall, the right hemisphere tends to be more synthetic and holistic in its functioning than the left. Nevertheless, patterns of hemispheric specialization are complex and cannot easily be generalized.

 The specialization of the two hemispheres of the brain is exemplified in an early study by Levy and the American neurobiologist Roger W. Sperry, who worked with split-brain patients—that is, individuals whose corpus callosum had been severed. Because the corpus callosum links the two hemispheres in a normal brain, severing it leaves the hemispheres of these patients functioning largely independently of each other.

      Levy and Sperry asked split-brain patients to hold small wooden blocks, which they could not see, in either their left or their right hand and to match them with corresponding two-dimensional pictures. They found that patients using the left hand did better at this task than those using the right; but, of more interest, they found that the two groups of patients appeared to use different strategies in solving the problem. Their analysis demonstrated that the right hand (dominated by the left hemisphere of the brain) functioned better with patterns that are readily described in words but are difficult to discriminate visually. In contrast, the left hand (dominated by the right hemisphere) was more adept with patterns requiring visual discrimination.

Brain-wave studies
      A second front of biological research has involved the use of brain-wave recordings. The German-born British psychologist Hans Eysenck, for example, studied brain patterns and speed of response in people taking intelligence tests. Earlier brain-wave research had studied the relation between these waves and performance on ability tests or in various cognitive tasks. Researchers in some of these studies found a relationship between certain aspects of electroencephalogram (EEG) waves, event-related-potential (ERP) waves, and scores on a standard psychometric test of intelligence.

Blood-flow studies
      A third and more recent front of research involves the measurement of blood flow in the brain, which is a fairly direct indicator of functional activity in brain tissue. In such studies the amount and location of blood flow in the brain is monitored while subjects perform cognitive tasks. The psychologist John Horn, a prominent researcher in this area, found that older adults show decreased blood flow to the brain, that such decreases are greater in some areas of the brain than in others, and that the decreases are particularly notable in those areas responsible for close concentration, spontaneous alertness, and the encoding of new information. Using positron emission tomography (PET), the psychologist Richard Haier found that people who perform better on conventional intelligence tests often show less activation in relevant portions of the brain than do those who perform less well. In addition, neurologists Antonio Damasio and Hannah Damasio and their colleagues used PET scans and magnetic resonance imaging (MRI) to study brain function in subjects performing problem-solving tasks. These findings affirmed the importance of understanding intelligence as a faculty that develops over time.

Development of intelligence
      There have been a number of approaches to the study of the development of intelligence. Psychometric theorists, for instance, have sought to understand how intelligence develops in terms of changes in intelligence factors and in various abilities in childhood. For example, the concept of mental age was popular during the first half of the 20th century. A given mental age was held to represent an average child's level of mental functioning for a given chronological age. Thus, an average 12-year-old would have a mental age of 12, but an above-average 10-year-old or a below-average 14-year-old might also have a mental age of 12 years. The concept of mental age fell into disfavour, however, for two apparent reasons. First, the concept does not seem to work after about the age of 16. The mental test performance of, say, a 25-year-old is generally no better than that of a 24- or 23-year-old, and in later adulthood some test scores seem to start declining. Second, many psychologists believe that intellectual development does not exhibit the kind of smooth continuity that the concept of mental age appears to imply. Rather, development seems to come in intermittent bursts, whose timing can differ from one child to another.

The work of Jean Piaget
      The landmark work in intellectual development in the 20th century derived not from psychometrics but from the tradition established by the Swiss psychologist Jean Piaget. His theory was concerned with the mechanisms by which intellectual development takes place and the periods through which children develop. Piaget believed that the child explores the world, observes regularities, and makes generalizations—much as a scientist does. Intellectual development, he argued, derives from two cognitive processes that work in somewhat reciprocal fashion. The first, which he called assimilation, incorporates new information into an already existing cognitive structure. The second, which he called accommodation, forms a new cognitive structure into which new information can be incorporated.

      The process of assimilation is illustrated in simple problem-solving tasks. Suppose that a child knows how to solve problems that require calculating a percentage of a given number. The child then learns how to solve problems that ask what percentage of a number another number is. The child already has a cognitive structure, or what Piaget called a “schema,” for percentage problems and can incorporate the new knowledge into the existing structure.

      Suppose that the child is then asked to learn how to solve time-rate-distance problems, having never before dealt with this type of problem. This would involve accommodation—the formation of a new cognitive structure. Cognitive development, according to Piaget, represents a dynamic equilibrium between the two processes of assimilation and accommodation.

      As a second part of his theory, Piaget postulated four major periods in individual intellectual development. The first, the sensorimotor period, extends from birth through roughly age two. During this period, a child learns how to modify reflexes to make them more adaptive, to coordinate actions, to retrieve hidden objects, and, eventually, to begin representing information mentally. The second period, known as preoperational, runs approximately from age two to age seven. In this period a child develops language and mental imagery and learns to focus on single perceptual dimensions, such as colour and size. The third, the concrete-operational period, ranges from about age 7 to age 12. During this time a child develops so-called conservation skills, which enable him to recognize that things that may appear to be different are actually the same—that is, that their fundamental properties are “conserved.” For example, suppose that water is poured from a wide short beaker into a tall narrow one. A preoperational child, asked which beaker has more water, will say that the second beaker does (the tall thin one); a concrete-operational child, however, will recognize that the amount of water in the beakers must be the same. Finally, children emerge into the fourth, formal-operational period, which begins at about age 12 and continues throughout life. The formal-operational child develops the ability to think in terms of all logical combinations and learns to think with abstract concepts. For example, a child in the concrete-operational period will have great difficulty determining all the possible orderings of four digits, such as 3-7-5-8. The child who has reached the formal-operational stage, however, will adopt a strategy of systematically varying the orderings of the digits, starting perhaps with the last digit and working toward the first. This systematic way of thinking is not normally possible for those in the concrete-operational period.
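
      The combinatorial demand of that task can be made explicit: four distinct digits can be arranged in 4 × 3 × 2 × 1 = 24 ways, and the systematic enumeration that a formal-operational thinker approximates is what the following short sketch performs:

    from itertools import permutations

    orderings = [''.join(p) for p in permutations("3758")]
    print(len(orderings))  # 24 possible orderings
    print(orderings[:4])   # '3758', '3785', '3578', '3587'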

      Piaget's theory had a major impact on the views of intellectual development, but it is not as widely accepted today as it was in the mid-20th century. One shortcoming is that the theory deals primarily with scientific and logical modes of thought, thereby neglecting aesthetic, intuitive, and other modes. In addition, Piaget underestimated children's abilities: for the most part, children can perform mental operations earlier than the ages at which he estimated they could perform them.

Post-Piaget theories
      Despite its diminished influence, Piaget's theory continues to serve as a basis for other views. One theory has expanded on Piaget's work by suggesting a possible fifth, adult, period of development, such as “problem finding.” Problem finding comes before problem solving; it is the process of identifying problems that are worth solving in the first place. A second course has identified periods of development that are quite different from those suggested by Piaget. A third course has been to accept the periods of development Piaget proposed but to hold that they have different cognitive bases. Some of the theories in the third group emphasize the importance of memory capacity. For example, it has been shown that children's difficulties in solving transitive inference problems such as

If A is greater than B, B is greater than C, and D is less than C, which is the greatest?
result primarily from memory limitations rather than reasoning limitations (as Piaget had argued). A fourth course has been to focus on the role of knowledge in development. Some investigators argue that much of what has been attributed to reasoning and problem-solving ability in intellectual development is actually better attributed to the extent of the child's knowledge.
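
      As a small illustration of what the transitive inference above requires (the encoding below is an assumption made purely for exposition), the premises can be represented as ordered pairs, and the greatest element is the one item that never appears as the smaller member of any pair. This shortcut works here because the premises chain together completely (A > B > C > D):

    # Premises: A > B, B > C, D < C (that is, C > D); each pair means "left is greater than right."
    greater_than = [("A", "B"), ("B", "C"), ("C", "D")]

    items = {x for pair in greater_than for x in pair}
    smaller_items = {right for _, right in greater_than}
    print(items - smaller_items)  # {'A'} -- A is never the smaller member of any premise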

The environmental viewpoint
      The views of intellectual development described above all emphasize the importance of the individual in intellectual development. But an alternative viewpoint emphasizes the importance of the individual's environment, particularly his social environment. This view is related to the cognitive-contextual theories discussed above. Championed originally by the Russian psychologist L.S. Vygotsky, this viewpoint suggests that intellectual development may be largely influenced by a child's interactions with others: a child sees others thinking and acting in certain ways and then internalizes and models what is seen. An elaboration of this view is the suggestion by the Israeli psychologist Reuven Feuerstein that the key to intellectual development is what he called “mediated learning experience.” The parent mediates, or interprets, the environment for the child, and it is largely through this mediation that the child learns to understand and interpret the world.

      The role of environment is particularly evident in studies across cultures. In her research on the cultural contexts of intelligence, Greenfield, while studying indigenous Mayan people, found that the Mayan conception of intelligence is much more collective than the conception of intelligence in European or North American cultures. To the Maya, much of being intelligent involves being able to work with others effectively. In addition, the psychologist Elena Grigorenko and her colleagues, in "The Organization of Luo Conceptions of Intelligence: A Study of Implicit Theories in a Kenyan Village" (2001), found that rural Kenyans have a broad conception of intelligence that emphasizes moral behaviour, particularly duty to others.

      Children who grow up in environments that do not stress Western principles of education may not be able to demonstrate their abilities on conventional Western intelligence tests. Sternberg and others have found that rural Tanzanian children performed much better on skills tests when they were given extended instruction beyond the normal test instructions. Without this additional instruction, however, the children did not always understand what they were supposed to do, and, because of this, they underperformed on the tests. Similarly, a study in Kenya measured children's knowledge of natural remedies used to combat parasites and other common illnesses. Tests for this type of knowledge were combined with conventional Western tests of intelligence and academic achievement. Results showed a negative correlation between practical intelligence (knowledge of medical remedies) and academic achievement. These findings suggested that in some cultures, academic skills may not be particularly valued; as a result, the brighter children invest more effort in acquiring practical skills.

Measuring intelligence
      Almost all of the theories discussed above employ complex tasks for gauging intelligence in both children and adults. Over time, theorists chose particular tasks for analyzing human intelligence, some of which have been explicitly discussed here—e.g., recognition of analogies, classification of similar terms, extrapolation of number series, performance of transitive inferences, and the like. Although the kinds of complex tasks discussed so far belong to a single tradition for the measurement of intelligence, the field actually has two major traditions. The tradition that has been discussed most prominently and has been most influential is that of the French psychologist Alfred Binet (1857–1911).

 An earlier tradition, and one that still shows some influence upon the field, is that of the English scientist Sir Francis Galton. Building on ideas put forth by his cousin Charles Darwin in On the Origin of Species (1859), Galton believed that human capabilities could be understood through scientific investigation. From 1884 to 1890 Galton maintained a laboratory in London where visitors could have themselves measured on a variety of psychophysical tasks, such as weight discrimination and sensitivity to musical pitch. Galton believed that psychophysical abilities were the basis of intelligence and, hence, that these tests were measures of intelligence. The earliest formal intelligence tests, therefore, required a person to perform such simple tasks as deciding which of two weights was heavier or showing how forcefully one could squeeze one's hand.

   The Galtonian tradition was taken to the United States by the American psychologist James McKeen Cattell. Later, one of Cattell's students, the American anthropologist Clark Wissler, collected data showing that scores on Galtonian types of tasks were not good predictors of grades in college or even of scores on other tasks. Cattell nonetheless continued to develop his Galtonian approach in psychometric research and, with Edward Thorndike, helped to establish a centre for mental testing and measurement.

The IQ test
      The more influential tradition of mental testing was developed by Binet and his collaborator, Theodore Simon, in France. In 1904 the minister of public instruction in Paris named a commission to study or create tests that would ensure that mentally retarded children received an adequate education. The minister was also concerned that children of normal intelligence were being placed in classes for mentally retarded children because of behaviour problems. Even before Wissler's research, Binet, who was charged with developing the new test, had flatly rejected the Galtonian tradition, believing that Galton's tests measured trivial abilities. He proposed instead that tests of intelligence should measure skills such as judgment, comprehension, and reasoning—the same kinds of skills measured by most intelligence tests today. Binet's early test was taken to Stanford University by Lewis Terman, whose version came to be called the Stanford-Binet test. This test has been revised frequently and continues to be used in countries all over the world.

      The Stanford-Binet test and others like it have yielded at the very least an overall score referred to as an intelligence quotient, or IQ. Some tests, such as the Wechsler Adult Intelligence Scale (Revised) and the Wechsler Intelligence Scale for Children (Revised), yield an overall IQ as well as separate IQs for verbal and performance subtests. An example of a verbal subtest would be vocabulary, whereas an example of a performance subtest would be picture arrangement, the latter requiring an examinee to arrange a set of pictures into a sequence so that they tell a comprehensible story.

      Later developments in intelligence testing expanded the range of abilities tested. For example, in 1997 the psychologists J.P. Das and Jack A. Naglieri published the Cognitive Assessment System, a test based on a theory of intelligence first proposed by the Russian psychologist Alexander Luria. The test measured planning abilities, attentional abilities, and simultaneous and successive processing abilities. Simultaneous processing abilities are used to solve tasks such as figural matrix problems, in which the test taker must fill in a matrix with a missing geometric form. Successive processing abilities are used in tests such as digit span, in which one must repeat back a string of memorized digits.

      IQ was originally computed as the ratio of mental age to chronological (physical) age, multiplied by 100. Thus, if a child of age 10 had a mental age of 12 (that is, performed on the test at the level of an average 12-year-old), the child was assigned an IQ of 12/10 × 100, or 120. If the 10-year-old had a mental age of 8, the child's IQ would be 8/10 × 100, or 80. A score of 100, where the mental age equals the chronological age, is average.
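
      The original ratio computation, long since superseded by the deviation IQs described below, amounts to a one-line formula:

    def ratio_iq(mental_age, chronological_age):
        return mental_age / chronological_age * 100

    print(ratio_iq(12, 10))  # 120.0
    print(ratio_iq(8, 10))   # 80.0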

      As discussed above, the concept of mental age has fallen into disrepute. Many tests still yield an IQ, but they are most often computed on the basis of statistical distributions. The scores are assigned on the basis of what percentage of people of a given group would be expected to have a certain IQ. (See psychological testing.)

The distribution of IQ scores
 Intelligence test scores follow an approximately normal distribution, meaning that most people score near the middle of the distribution of scores and that scores drop off fairly rapidly in frequency as one moves in either direction from the centre. For example, on the IQ scale, about 2 out of 3 scores fall between 85 and 115, and about 19 out of 20 scores fall between 70 and 130. Put another way, only 1 out of 20 scores differs from the average IQ (100) by more than 30 points.
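
      Those proportions follow from treating IQ as approximately normal with a mean of 100 and a standard deviation of 15; a short numerical check (assuming the scipy library is available) is:

    from scipy.stats import norm

    iq = norm(loc=100, scale=15)
    print(round(iq.cdf(115) - iq.cdf(85), 3))   # about 0.683 -- roughly 2 out of 3 scores
    print(round(iq.cdf(130) - iq.cdf(70), 3))   # about 0.954 -- roughly 19 out of 20 scores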

      It has been common to attach labels to certain levels of IQ. At the upper end, the label gifted is sometimes assigned to people with IQs of 130 or higher. Scores at the lower end have been given the labels borderline retarded (70 to 84) and severely retarded (25 to 39). All such terms, however, have pitfalls and can be counterproductive. First, their use assumes that conventional intelligence tests provide sufficient information to classify someone as gifted or mentally retarded, but most authorities would reject this assumption. In fact, the information yielded by conventional intelligence tests represents only a fairly narrow range of abilities. To label someone as mentally retarded solely on the basis of a single test score, therefore, is to risk doing a disservice and an injustice to that person. Most psychologists and other authorities recognize that social as well as strictly intellectual skills must be considered in any classification of mental retardation.

      Second, giftedness is generally recognized as more than just a degree of intelligence, even broadly defined. Most psychologists who have studied gifted persons agree that a variety of aspects make up giftedness. The psychologists Howard E. Gruber and Mihaly Csikszentmihalyi were among those who doubted that giftedness in childhood is the sole predictor of adult abilities. Gruber held that giftedness unfolds over the course of a lifetime and involves achievement at least as much as intelligence. Gifted people, he contended, have life plans that they seek to realize, and these plans develop over the course of many years. As was true in the discussion of mental retardation, the concept of giftedness is trivialized if it is understood only in terms of a single test score.

      Third, the significance of a given test score can be different for different people. A certain IQ score may indicate a higher level of intelligence for a person who grew up in poverty and attended an inadequate school than it would for a person who grew up in an upper-middle-class environment and was schooled in a productive learning environment. An IQ score on a test given in English also may indicate a higher level of intelligence for a person whose first language is not English than it would for a native English speaker. Another aspect that affects the significance of test scores is that some people are “test-anxious” and may do poorly on almost any standardized test. Because of these and similar drawbacks, it has come to be believed that scores should be interpreted carefully, on an individual basis.

Heritability and malleability of intelligence
      Intelligence has historically been conceptualized as a more or less fixed trait. Whereas a minority of investigators believe either that it is highly heritable or that it is minimally heritable, most take an intermediate position.

      Among the most fruitful methods that have been used to assess the heritability of intelligence is the study of identical twins who were separated at an early age and reared apart. If the twins were raised in separate environments, and if it is assumed that when twins are separated they are randomly distributed across environments (often a dubious assumption), then the twins would have in common all of their genes but none of their environment, except for chance environmental overlap. As a result, the correlation between their scores on intelligence tests provides an estimate of the extent to which the measured abilities are linked to heredity. Another method compares the relationship between intelligence-test scores of identical twins and those of fraternal twins. Because these results are computed on the basis of intelligence-test scores, however, they represent only those aspects of intelligence that are measured by the tests.
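
      One widely used, though simplified, way of turning such twin correlations into a heritability estimate is Falconer's formula, which doubles the difference between the identical-twin and fraternal-twin correlations. The correlations below are illustrative values, not results from any particular study:

    def falconer_heritability(r_identical, r_fraternal):
        # h^2 is estimated as 2 * (r_MZ - r_DZ), a simplified classical approximation.
        return 2 * (r_identical - r_fraternal)

    print(falconer_heritability(0.86, 0.60))  # 0.52 under these assumed correlations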

      Studies of twins do in fact provide strong evidence for the heritability of intelligence; the scores of identical twins reared apart are highly correlated. In addition, adopted children's scores are highly correlated with those of their birth parents and not with those of their adoptive parents. Also significant are findings that heritability can differ between ethnic and racial groups, as well as across time within a single group; that is, the extent to which genes versus environment matter in IQ depends on many factors, including socioeconomic class. Moreover, the psychologist Robert Plomin and others have found that evidence of the heritability of intelligence increases with age; this suggests that, as a person grows older, genetic factors become a more important determinant of intelligence, while environmental factors become less important.

      Whatever the heritability factor of IQ may be, it is a separate issue whether intelligence can be increased. Evidence that it can was provided by the American-born New Zealand political scientist James Flynn, who showed that intelligence test scores around the world rose steadily in the late 20th century. The reasons for the increase are not fully understood, however, and the phenomenon thus requires additional careful investigation. Among many possible causes of the increase, for example, are environmental changes such as the addition of vitamin C to prenatal and postnatal diet and, more generally, the improved nutrition of mothers and infants as compared with earlier in the century. In their book The Bell Curve (1994), Richard Herrnstein and Charles Murray argued that IQ is important for life success and that differences between racial groups in life success can be attributed in part to differences in IQ. They speculated that these differences might be genetic. As noted above, such claims remain speculative (see race: The scientific debate over “race” (race)).

      Despite the general increase in scores, average IQs continue to vary both across countries and across different socioeconomic groups. For example, many researchers have found a positive correlation between socioeconomic status and IQ, although they disagree about the reasons for the relationship. Most investigators also agree that differences in educational opportunities play an important role, though some believe that the main basis of the difference is hereditary. There is no broad agreement about why such differences exist. Most important, it should be noted that these differences are based on IQ alone and not on intelligence as it is more broadly defined. Even less is known about group differences in intelligence as it is broadly defined than is known about differences in IQ. Nevertheless, theories of inherited differences in IQ between racial groups have been found to be without basis. There is more variability within groups than between groups.

      Finally, no matter how heritable intelligence may be, some aspects of it are still malleable. With intervention, even a highly heritable trait can be modified. A program of training in intellectual skills can increase some aspects of a person's intelligence; however, no training program—no environmental condition of any sort—can make a genius of a person with low measured intelligence. But some gains are possible, and programs have been developed for increasing intellectual skills. Intelligence, in the view of many authorities, is not a foregone conclusion the day a person is born. A main trend for psychologists in the intelligence field has been to combine testing and training functions to help people make the most of their intelligence.

Robert J. Sternberg

Additional Reading

General works
Introductions that provide a frame of reference and the terminology necessary for understanding the study of intelligence are found in such comprehensive sources as Howard Gardner, Mindy L. Kornhaber, and Warren K. Wake, Intelligence: Multiple Perspectives (1996); Richard L. Gregory (ed.), The Oxford Companion to the Mind, 2nd ed. (2004); and Raymond J. Corsini and W. Edward Craighead (eds.), Encyclopedia of Psychology and Behavioral Science, 3rd ed., 4 vol. (2001). For current research in the field, Psychology Today provides coverage on a general level. A critique of intelligence studies is provided by both Stephen Jay Gould, The Mismeasure of Man, rev. and expanded ed. (1996); and Carol Tavris, The Mismeasure of Woman (1992). Comprehensive works on intelligence include Ian J. Deary, Intelligence: A Very Short Introduction (2001); and Robert J. Sternberg (ed.), Handbook of Intelligence (2000), and International Handbook of Intelligence (2004).

Theories of intelligence
Early theories are presented in Alfred Binet and Theodore Simon, The Development of Intelligence in Children: The Binet-Simon Scale, trans. from the French by Elizabeth S. Kite (1916, reprinted 1983); and Charles E. Spearman, The Nature of “Intelligence” and the Principles of Cognition (1923, reprinted 1973), and The Abilities of Man: Their Nature and Measurement (1932, reprinted 1970). Later theories are addressed in Howard Gardner, Frames of Mind: The Theory of Multiple Intelligences, 2nd ed. (1993, reissued 2004), and Intelligence Reframed: Multiple Intelligences for the 21st Century (1999); John B. Carroll, Human Cognitive Abilities: A Survey of Factor-Analytic Studies (1993); Raymond B. Cattell, Intelligence: Its Structure, Growth, and Action (1987; originally published as Abilities: Their Structure, Growth, and Action [1971]); Robert J. Sternberg, The Triarchic Mind: A New Theory of Human Intelligence (1988); Michael Cole and Barbara Means, Comparative Studies of How People Think: An Introduction (1981); Hans J. Eysenck, Intelligence: A New Look (1998, reissued 2000); and Oliver Wilhelm and Randall W. Engle (eds.), Handbook of Understanding and Measuring Intelligence (2005).

Development of intelligence
A definitive summary of Piaget's early work is presented in John H. Flavell, The Developmental Psychology of Jean Piaget (1963). Studies by Piaget on the mechanisms of intellectual development and fundamental cognitive processes include Jean Piaget, The Psychology of Intelligence, trans. by Malcolm Piercy and D.E. Berlyne (1950, reissued 2001; originally published in French, 1947); and Howard E. Gruber and J. Jacques Vonèche (eds.), The Essential Piaget (1977, reissued 1995). L.S. Vygotsky, Mind in Society: The Development of Higher Psychological Processes, ed. by Michael Cole et al. (1978), examines the cognitive-contextual theory of intellectual development.

Measuring intelligence
Early approaches to evaluating intelligence are presented in Francis Galton, Hereditary Genius: An Inquiry into Its Laws and Consequences (1869, reprinted 1998), and Inquiries into Human Faculty and Its Development (1883, reprinted 1998). American investigations at the beginning of the 20th century are discussed in Edward L. Thorndike et al., The Measurement of Intelligence (1927, reprinted 1973). Developments and applications of Binet's tradition of mental testing are described in Lewis M. Terman, The Measurement of Intelligence: An Explanation of and a Complete Guide for the Use of the Stanford Revision and Extension of the Binet-Simon Intelligence Scale (1916, reprinted 1975). Discussion of later research in the field is available in Philip E. Vernon, The Measurement of Abilities, 2nd ed. (1956, reissued 1972); Anne Anastasi and Susana Urbina, Psychological Testing, 7th ed. (1997); Robert J. Sternberg and Jean E. Pretz (eds.), Cognition and Intelligence: Identifying the Mechanisms of the Mind (2005); and Robert J. Sternberg and David D. Preiss (eds.), Intelligence and Technology: The Impact of Tools on the Nature and Levels of Human Ability (2005).

Robert J. Sternberg
