ClearWater Blog


What’s Wrong with Ability Tests

by Dr Tuvia Melamed on Feb.02, 2010, under Short Blogs


One of the main disappointments in our profession is the current state of commercial ability tests. Nearly 100 years after the first introduction of ‘scientific’ measurement of intellectual capacity, you would have expected something much better than what we currently have.

I do not challenge the available ability tests in terms of their technical capacity – i.e., my criticism is not about the quality of normative data, nor is it about the validity and reliability of the measures. I take these for granted. What has been frustrating me for the last 25 years of my practice as a business psychologist is the limited output we gain from an ability test.

Most tests require participants to spend close to an hour responding to demanding test items; yet, the final outcome in most cases does not amount to more than a single figure – normally a percentile score comparing the performance of the test taker to that of a norm group (e.g., ‘you scored above 63% of the norm group’). This is a far cry from the rich output and interpretations you get from a personality inventory. No wonder the British Psychological Society differentiates between the Level A certificate of competence in occupational testing (ability tests) and the Level B certificate, which is dedicated to personality instruments. Relative to personality instruments, there is not much to say about ability test outcomes.
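To make the ‘single figure’ concrete: a percentile score of this kind simply reports the share of a norm group the candidate outperformed. A minimal sketch follows; the norm data and names are made up for illustration.

```python
# The typical output of an ability test: one percentile figure comparing
# the candidate's raw score to a norm group. Norm data below are made up.
norm_scores = [12, 15, 18, 21, 22, 24, 27, 29, 31, 35]  # illustrative norm group
raw_score = 25
percentile = 100 * sum(s < raw_score for s in norm_scores) / len(norm_scores)
print(f"You scored above {percentile:.0f}% of the norm group")  # -> 60%
```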

Do not get me wrong – I am not challenging the value of ability tests, and I am not suggesting that they should be replaced by ‘more-value-for-money/effort’ personality instruments. What I am opening up for discussion is the frustrating state of affairs: the limited output we get from ability tests.

The closest attempt I am aware of to enrich the output of an ability test is Robert Hogan’s work with the HBRI (Hogan Business Reasoning Inventory). The test provides two scores – Strategic Reasoning and Tactical Reasoning – which are combined to create an overall critical reasoning score. Hogan provides an interpretation of each of the two individual scores (e.g., the implications of scoring 78% on Strategic Reasoning) as well as a very simple 2×2 classification into a 4-type typology based on the interaction between the two scores. As such, the report is far more informative than the typical practice of providing a single score. Yet, there is so much more that could be obtained from the available data.

The following are some possible ways to utilise ability tests better (a minimal scoring sketch follows the list):

  1. Wrong versus poor answers: The raw score on an ability test is made up of the number of correct answers. An answer to a test item can be either ‘Right’ or ‘Wrong’. Yet, the multiple-choice format used in the traditional psychometric approach does not differentiate between a wrong answer that is by far the worst option and a wrong answer that is closest to the correct one. Consider the following simple example. A test item might be: ‘Glove’ is to ‘Hand’ as ‘Sock’ is to ____ (a) Shoe; (b) Arm; (c) Leg; (d) Foot; (e) Cupboard. Although only answer ‘d’ is correct, some of the answers are closer to it than others. Choosing answer ‘e’ indicates less ability than choosing answer ‘c’. Yet, the dichotomous scoring system of tests will view both answers as equal.
  2. Time to complete the test: Most ability tests are time-limited. Two candidates who take the same test and get the same number of correct answers will be considered equal. Yet, if one took only 20 of the 40 minutes to complete the test, whereas the other took the full 40 minutes, there is obviously some difference in the ability of these candidates.
  3. Percentage of correct answers: Two candidates who took the same test and got the same number of correct answers will be considered equal. Yet, one may have attempted only 60% of the test items, whereas the other completed all items. There is a difference in the ability of the two candidates, but this is ignored when interpreting the results.
  4. Level of complexity: The work of Elliott Jaques suggested 8 levels of complexity of mental processes that are hierarchical, yet conceptually different from one another. Most tests, however, do not differentiate among levels of difficulty or complexity, and cluster all items together for scoring purposes. Thus, the interpretation for someone who scores highly on divergent-thinking items and poorly on convergent-thinking items should differ from that for someone with a different pattern of scores. Yet, under the single-score system, if the total number of correct answers is the same, both candidates will be viewed as equal.
  5. Interaction among sub-scores: Some tests provide sub-scores based on content (as distinct from the complexity and difficulty of the previous point). E.g., we get a breakdown of verbal reasoning, numerical reasoning, and abstract reasoning; or a breakdown of strategic reasoning and tactical reasoning. Most test publishers leave it at that; hardly any consider the interactions among the sub-scores. This is very different in personality assessment, where the ability to interpret the interaction between two factors is considered far more valuable than the interpretation of the simple main effect of each of the two factors.
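Taken together, points 1–3 amount to replacing the single raw count with a small score profile. Below is a minimal Python sketch of such a profile; the item format, partial-credit weights, and field names are illustrative assumptions, not the scoring rules of any published test.

```python
# Illustrative sketch only: the partial-credit weights, the item format,
# and all names below are assumptions, not any published test's scoring.

from dataclasses import dataclass

@dataclass
class Item:
    # Graded credit per option: the key earns 1.0, a near miss more than
    # a far miss (point 1: 'wrong' versus 'poor' answers).
    # E.g., for the 'Sock' item: {"a": 0.5, "b": 0.0, "c": 0.75, "d": 1.0, "e": 0.0}
    option_credit: dict

def score_profile(items, responses, minutes_used, minutes_allowed):
    """Return a profile of scores rather than a single count of correct answers."""
    attempted = sum(1 for r in responses if r is not None)
    # Graded credit replaces the dichotomous right/wrong count.
    credit = sum(item.option_credit.get(r, 0.0)
                 for item, r in zip(items, responses) if r is not None)
    return {
        "graded_score": credit,
        # Point 2: equal raw scores in half the time indicate more ability.
        "time_used_fraction": minutes_used / minutes_allowed,
        # Point 3: the proportion of items attempted qualifies the raw score.
        "attempt_rate": attempted / len(items),
        "accuracy_on_attempted": credit / max(attempted, 1),
    }
```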

So, what I am looking for is an ability test that will provide me with richer information than simply telling me how the individual scored in comparison with the norm group.

This frustration led me to spend the last three years developing an ability test that utilises these ideas and provides rich and valuable output. I called the final test Intellecto©. You can find out more about it by clicking on the image to the left; to see an example of its output, click here.







Beyond IQ

by Dr Tuvia Melamed on Jan.26, 2010, under Short Blogs

Advances in Assessment of Intellect

 

The traditional view of intelligence and its applications is challenged by a new model based on the capacity to handle complexity. The paper explores the validity of the 8-layer complexity model. While traditional measures of intelligence explain success up to technical-expert and early management levels, they bear hardly any relationship to success at higher organisational levels. The complexity model correlated far more strongly with success across all organisational levels.

 Introduction

The mainstream study and application of assessment of intellectual capability has not progressed much since the pioneering work of Wechsler and Galton. An examination of early books on human intellect (Board of Education, 1924; Vernon, 1938; Welton, 1891) suggests that many of the ideas and their applications are still relevant and current in the 21st century. More specifically, the concept of IQ and the components of human intelligence developed a century ago are still prominent in the field of assessment of intellectual capability.

Test publishers have managed to package test items in far more appealing and modernised ways: they have significantly improved the presentation and language used, introduced far more interactive, IT-driven test items, and may well use different terms instead of IQ (e.g., critical analysis and the like). However, conceptually there is not much difference between what early ability tests measured and what contemporary ability tests measure. Psychometric instruments of the kind used in the First World War to select British pilots seem applicable to current requirements.

This paper explores preliminary work using a different model of assessing human capability. The model uses the work of Elliott Jaques (1989) as a starting point, but progresses along different routes. It departs from the traditional model of the g factor and the content-specific sub-sets that distinguish between various forms of reasoning (e.g., verbal, numerical, abstract, or mechanical), and takes a more eclectic approach, examining the ability to understand and manage complexity. It shifts the emphasis from what can be called ‘Academic Intelligence’ (IQ) to the more business-oriented ‘Executive Intelligence’ and the complexity of business decision making. It departs from the view of human capability as evolving continuously and suggests discontinuous and distinct steps in human capability.

More specifically, the proposed model suggests that there are 8 discontinuous and distinct layers of potential capability and that people develop by discontinuous periodic jumps (rather than linearly) from one complexity state to the next. The 8 layers are universal and cut across organisations, industries, and nations. The ability to handle complexity is not static: it matures with age in a predictable manner. For applied purposes, the level of work complexity should be in line with the person’s potential capability. There are 8 parallel levels of complexity in organisational roles that correspond with the complexity of mental processes (see Box #1).

BOX #1 The 8 levels of the Complexity Model:

  1. Retrieval Thinking – Best Practice. Operating in a structured, methodical way, covering all the relevant information, and interpreting the information correctly.
  2. Affirmative Thinking – Pattern Recognition. Creating a potential logical explanation or solution based on a series of independent pieces of data. It is about bridging gaps in information by seeking linkages between independent pieces of information.
  3. Convergent Thinking – Critical Analysis. Examining and evaluating given hypotheses. Using systematic reasoning to identify a correct answer from a series of available options. It involves the verification of hypotheses, and equates to what in everyday language is referred to as pure intellect.
  4. Divergent Thinking – Creativity. It involves the opening up or creation of new hypotheses, using induction – exploring the mass of information to identify possible trends and patterns – and thinking of many original, diverse, and elaborate ideas. Taking separate elements and blending them into something completely new and original; combining previously unconnected ideas, information, and elements to create something new.
  5. System Thinking – Innovation. It involves applying creative ideas to the wider system and ensuring that they are viable, by exploring the elements as a whole (holistically), including the various types of relationships between the many elements in a complex system. Turning new ideas into practical reality by understanding the system in which the creative input will be placed, and its immediate and long-term implications.
  6. Transformation Thinking – Reframing. It involves visioning a new future and repositioning systems of information in a new perspective. It gives new life and meaning to well-established structures, arrangements, and systems, by turning them on their head or viewing them from a completely different perspective, enabling them to achieve the future vision.
  7. Reconstructive Thinking – Re-forming. It involves dismantling complex settings and re-building them as something quite different. It takes the reframed reality, breaks it into pieces, and re-models it to be fit for purpose. It involves inducing and deducing global information systems into solution routes.
  8. New World Thinking – Revolution. The creation of a new body of knowledge that takes our current understanding of the world to a new level. It involves challenging the most fundamental building blocks of our reality and replacing them with something rather different, new, and unexpected. It replaces the old world with a new one, a world that is significantly different (never the same) from what was known before.

One of the key challenges that the complexity model poses to the traditional view of human intelligence is that the latter focuses only on Convergent Thinking – pure logic and the systematic verification of hypotheses. It is about narrowing down options to a single logical and correct answer. This is typified by the common way of assessing Convergent Thinking – i.e., seeking a single correct answer from 4 possible options (the multiple-choice method). The complexity model recognises the importance of Convergent Thinking, but views it as only level 3 on the 8-level model. Consequently, it suggests that Convergent Thinking will explain success and high performance in roles that require, and are heavily reliant on, Convergent Thinking – i.e., technical-expert roles or lower management roles. For higher-level roles, Convergent Thinking will still have importance, but other levels of thinking, or of handling complexity, become more important. The higher the hierarchical role, the less importance is placed on Convergent Thinking.

A secondary challenge explores the effect of age. While traditional measures of intelligence are known to have a negative correlation with age, the complexity model explicitly suggests that the ability to handle complexity improves with age. Thus, the pure use of traditional measures of intelligence will show bias in favour of young people: a hiring recommendation based on traditional measures of ability will favour young candidates. This seems counter-intuitive, as it neglects the knowledge and experience that more mature candidates bring. The complexity model accounts for knowledge and experience.

Objectives

Explore the validity of the complexity model and contrast it with the validity of traditional measures of human intelligence in explaining success and high performance in a business context.

Hypothesis #1: While traditional measures of human intelligence will explain success up to technical-expert level, their explanatory power will decrease linearly from that point as a function of role complexity. Measures of complexity, on the other hand, will explain success across all hierarchical levels.

 Hypothesis #2: The predictive validity of traditional measures of intelligence is impaired by the effect of age; whereas the predictive validity of complexity measures is not negatively affected by age.

Design

The study comprised 70 managers from 5 different organisations. It used a regression and correlation approach to explore and contrast the effect of traditional versus complexity measures of intelligence in explaining success and performance in a business environment.

Method

Sample: The sample included 70 participants, from 5 different organisations, who took part in talent identification and management programmes.

Predictors: (a) 2 traditional measures of intellect (verbal and numerical) – the Watson-Glaser Critical Thinking Appraisal and the Rust Advanced Numerical Reasoning Appraisal (RANRA). (b) 2 sets of measures of handling complexity: first, the complexity profile – scores on each of the 8 levels of the complexity model; second, a single aggregate score based on the 8 scores.

 Criterion: The hierarchical level of the participant, using the 8-level organisational role complexity scale.

 Control Variables: Age and the 28 personality measures derived from the Hogan instruments (HPI, MVPI, and HDS).

Results

Table #1 displays the correlation matrix amongst the study variables. Further analysis controlled for the effect of age through partial correlations. The results suggested that the Complexity Score correlated strongly with the success criterion (r = .80, p < .0001). The traditional intelligence measures (verbal and numerical reasoning) were barely significant (r = .27 and .29), and fell just below the significance level when controlling for age (r = .23 and .24). As expected, the traditional intelligence measures correlated strongly with level 3 of the complexity model, ‘Convergent Thinking’ (r = .80 and .75); their correlations with the higher thinking levels dropped in a relatively linear fashion.

A stepwise regression analysis explained 68% of the criterion’s variance, R² = .678, F(2,69) = 69.9, p < .0001. Only 2 predictors entered the regression equation, namely the Complexity Score (b = .52) and Level 5: System Thinking (b = .31).
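As a companion to Table #1, the sketch below shows how a first-order partial correlation (a predictor against Role Level, with age held constant) can be computed; the simulated data, seed, and variable names are illustrative assumptions, not the study data.

```python
# First-order partial correlation, r(x, y | z), of the kind reported in
# Table #1. Simulated data for illustration only; not the study data.

import numpy as np

def partial_corr(x, y, z):
    """Correlation between x and y with z (here, age) held constant."""
    r_xy = np.corrcoef(x, y)[0, 1]
    r_xz = np.corrcoef(x, z)[0, 1]
    r_yz = np.corrcoef(y, z)[0, 1]
    return (r_xy - r_xz * r_yz) / np.sqrt((1 - r_xz**2) * (1 - r_yz**2))

rng = np.random.default_rng(1)
age = rng.normal(40, 8, 70)                       # n = 70, as in the study
complexity = 0.5 * age + rng.normal(0, 5, 70)     # handling complexity matures with age
role_level = 0.6 * complexity + rng.normal(0, 4, 70)
print(partial_corr(complexity, role_level, age))  # complexity-role link, net of age
```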

TABLE #1: Correlations and Partial Correlations Matrix for key variables (n = 70)

                                     Bivariate               Partial (controlling for age)
                                     1     2     3     4     1     2     3     4
Criterion
1. Role Level
Predictors
2. Verbal Reasoning                 .27                     .23
3. Numerical Reasoning              .29   .87               .24   .85
4. Complexity Score                 .80   .29   .26         .80   .27   .24
Complexity Levels
5a. #1 Retrieval Thinking           .29   .44   .42   .23   .26   .40   .37   .21
5b. #2 Affirmative Thinking         .50   .59   .61   .52   .28   .56   .59   .51
5c. #3 Convergent Thinking          .41   .80   .75   .36   .36   .78   .70   .35
5d. #4 Divergent Thinking           .66   .30   .24   .84   .68   .33   .27   .85
5e. #5 System Thinking              .76   .29   .32   .81   .76   .25   .27   .81
5f. #6 Transformation Thinking      .58   .19   .21   .73   .58   .18   .21   .73
5g. #7 Reconstructive Thinking      .49   .18   .19   .63   .49   .18   .20   .64
5h. #8 New World Thinking            *

r ≥ .25, p ≤ .05; r ≥ .30, p ≤ .01; r ≥ .38, p ≤ .001

* No variance was found for Level 8 Thinking (New World Thinking)
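The significance cut-offs quoted beneath the table can be checked against the t distribution for a correlation based on n = 70; a short sketch follows (the helper name is my own).

```python
# Check the table's cut-offs: for a correlation r based on n observations,
# t = r * sqrt(n - 2) / sqrt(1 - r**2) has n - 2 degrees of freedom under
# the null, so the critical |r| is t_crit / sqrt(n - 2 + t_crit**2).

from scipy import stats

def critical_r(alpha, n):
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)  # two-tailed
    return t_crit / (n - 2 + t_crit**2) ** 0.5

for alpha in (0.05, 0.01, 0.001):
    print(f"p <= {alpha}: |r| >= {critical_r(alpha, 70):.2f}")
# Prints roughly .24, .31, .39, in line with the .25/.30/.38 thresholds above.
```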

Conclusions

The findings demonstrated that, while traditional measures of intelligence are repeatedly quoted as the single best predictor of performance, their effect is somewhat limited to certain hierarchical levels within organisations. The complexity model seems to provide a better framework for explaining high performance and success, as it is designed to cover the full range of hierarchical levels.

References

  • Board of Education (1924). Psychological Tests of Educable Capacity. H.M. Stationery Office, London.
  • Jaques, E. (1989). Requisite Organisation. Cason Hall, Arlington, VA.
  • Vernon, P. E. (1938). Assessment of Psychological Qualities by Verbal Methods. H.M. Stationery Office, London.
  • Welton, J. (1891). A Manual of Logic. University Correspondence College Press Warehouse, Strand, W.C.





