ClearWater Blog


Pelé or Maradona?

by Dr Tuvia Melamed on Feb.12, 2010, under Short Blogs

 Dr Tuvia Melamed
ClearWater A&D





Pelé or Maradona? Probably the biggest ever debate among football lovers. The first led teams of ‘magicians’ to three World Cup wins, made the name of Brazil synonymous with football, and established Brazil as the greatest ever football nation. The other won the 1986 World Cup single-handed (literally, some might say). This blog builds on this timeless comparison to explore common issues in assessment for high-level roles.


While all football fans refer to Brazil 1970 as the greatest ever football team, with the likes of Tostão, Rivelino, Jairzinho, Gérson, and Carlos Alberto complementing the brilliance of Pelé, hardly anybody can name an Argentinean player who played alongside Maradona in 1986.

So who is greater: the team player who elevated a team to become the best ever, or the individualistic genius who demolished any opposition in his way with only the limited support of mediocre and unmemorable team-mates?


Difficult questions, but the difficulty may arise because these are the wrong questions to ask. Maybe we should ask: can we restructure our operations to accommodate both candidates? Or: what set of skills would best complement the capability we already have in our business? ‘Who is better?’ doesn’t help. They are both good. They will both deliver results.

Could the Pelé of 1970 have won the 1986 World Cup with the mediocre Argentinean team? Would the individualistic capabilities of Maradona have complemented the magic of the 1970 Brazil side? We can only speculate. Nevertheless, the talent questions we focus on in business are always contextualised. We have seen too often business ‘A’ players turn mediocre, or become a liability, once moved from one environment to another.

To avoid these issues, at ClearWater we have devised a methodology that helps to differentiate between true ‘A’ players and one-hit wonders: between those who are guaranteed to succeed wherever they are placed and those whose success is a case of being ‘the right person at the right time’. We call it ‘Mastering Complexity’. Click here to learn more.


What’s Wrong with Ability Tests

by Dr Tuvia Melamed on Feb.02, 2010, under Short Blogs



One of the main disappointments in our profession is the current state of commercial ability tests. Nearly 100 years after the first introduction of ‘scientific’ measurements of intellectual capacity, you would have expected something much better than what we currently have.

I do not challenge the available ability tests in terms of their technical capacity; that is, my criticism is not about the quality of normative data, nor is it about the validity and reliability of the measures. I take these for granted. What has frustrated me over the last 25 years of my practice as a business psychologist is the limited output we gain from an ability test.

Most tests require participants to spend close to an hour responding to demanding test items; yet the final outcome in most cases does not amount to more than a single figure, normally a percentile score comparing the performance of the test taker to that of a norm group (e.g., ‘you scored above 63% of the norm group’). This is a far cry from the rich output and interpretations you get from a personality inventory. No wonder the British Psychological Society differentiates between the Level A certificate of competence in occupational testing (ability tests) and the Level B certificate dedicated to personality instruments. Relative to personality instruments, there is not much to say about ability test outcomes.

Do not get me wrong: I am not challenging the value of ability tests, and I am not suggesting that they should be replaced by ‘more-value-for-money/effort’ personality instruments. What I am opening up for discussion is the frustrating state of affairs of the limited output we get from ability tests.

The closest attempt to enrich the output of an ability test that I am aware of is the work of Robert Hogan with the HBRI (Hogan Business Reasoning Inventory). The test provides two scores – Strategic Reasoning and Tactical Reasoning. These are combined to create an overall critical reasoning score. Hogan provides an interpretation of each of the two individual scores (e.g., what are the implications of scoring 78% on Strategic Reasoning) as well as a very simple 2×2 classification into a 4-type typology based on the interaction between the two scores. As such, the report is far more informative than the typical practice of providing a single score. Yet, there is so much more that can be obtained from the available data.

The following are some possible ways to utilise ability tests better:

  1. Wrong versus poor answers: The raw score on an ability test is made up of the number of correct answers. An answer to a test item can be either ‘right’ or ‘wrong’. Yet the multiple-choice format used in the traditional psychometric approach does not differentiate between a wrong answer that is by far the worst option and a wrong answer that is closest to the correct one. Consider the following simple example. A test item might be: ‘Glove’ is to ‘Hand’ as ‘Sock’ is to ____ (a) Shoe; (b) Arm; (c) Leg; (d) Foot; (e) Cupboard. Although only answer ‘d’ is correct, some of the answers are closer to it than others. Choosing answer ‘e’ indicates less ability than choosing answer ‘c’. Yet the dichotomous scoring system of tests views both answers as equal.
  2. Time to complete the test: Most ability tests are time-limited. Two candidates who take the same test and get the same number of correct answers will be considered equal. Yet if one took only 20 of the 40 minutes to complete the test whereas the other took the full 40 minutes, there is obviously some difference in the ability of these candidates.
  3. Percentage of correct answers: Two candidates who took the same test and got the same number of correct answers will be considered equal. Yet one may have attempted only 60% of the test items, whereas the other completed all items. There is a difference in the ability of the two candidates, but it is ignored when interpreting the results.
  4. Level of complexity: The work of Elliot Jaques suggested 8 levels of complexity of mental processes that are hierarchical, yet conceptually different from one another. Most tests, however, do not differentiate among levels of difficulty or complexity, and cluster all items together for scoring purposes. The interpretation for someone who scores highly on divergent-thinking items and poorly on convergent-thinking items should differ from that for someone with the opposite pattern. Yet under the single-score system, if the total number of correct answers is the same, both candidates will be viewed as equal.
  5. Interaction among sub-scores: Some tests provide sub-scores based on content (distinct from the complexity and difficulty of the previous point), e.g., a breakdown into verbal, numerical, and abstract reasoning, or into strategic and tactical reasoning. Most test publishers leave it at that. Hardly any consider the interactions among the sub-scores. This is very different in personality assessment, where the ability to interpret the interaction between two factors is considered far more valuable than the interpretation of the simple main effect of each factor.
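The five ideas above can be combined into a single enriched scoring scheme. The sketch below is a minimal illustration only; the item, the partial-credit weights, and the three reported facets are hypothetical choices for this example, not taken from any published test.

```python
# Hypothetical enriched scoring for a multiple-choice ability test.
# Each item assigns partial credit to every option (1.0 = correct,
# smaller values for "near-miss" wrong answers), and the report keeps
# speed and completion as separate facets rather than folding
# everything into one percentile.

def enriched_score(responses, credit_keys, minutes_used, time_limit, n_items):
    """responses: {item_id: chosen_option}; credit_keys: {item_id: {option: credit}}."""
    attempted = len(responses)
    raw = sum(credit_keys[item][choice] for item, choice in responses.items())
    return {
        "partial_credit": raw,                   # idea 1: wrong vs poor answers
        "speed": 1 - minutes_used / time_limit,  # idea 2: time to complete
        "completion": attempted / n_items,       # idea 3: percentage attempted
    }

# Toy item from the text: 'Glove' is to 'Hand' as 'Sock' is to ___
credit_keys = {"q1": {"d": 1.0, "c": 0.5, "a": 0.25, "b": 0.1, "e": 0.0}}
print(enriched_score({"q1": "c"}, credit_keys, minutes_used=20, time_limit=40, n_items=2))
# → {'partial_credit': 0.5, 'speed': 0.5, 'completion': 0.5}
```

A near-miss answer (‘c’, Leg) scores half credit here, whereas the dichotomous system would record the same zero as for ‘e’ (Cupboard).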

So, what I am looking for is an ability test that provides richer information than simply telling me how the individual scored in comparison to the norm group.

This frustration led me to spend the last three years developing an ability test that utilises these ideas and provides rich and valuable output. I called the final test Intellecto©. You can find out more about it by clicking on the image to the left. To see an example of its output, click here.


Beyond IQ

by Dr Tuvia Melamed on Jan.26, 2010, under Short Blogs

Advances in Assessment of Intellect


The traditional view of intelligence and its applications is challenged by a new model based on the capacity to handle complexity. The paper explores the validity of the 8-layer complexity model. While traditional measures of intelligence explain success up to technical-expert and early management levels, they bear hardly any relationship with success at higher organisational levels. The complexity model correlated far more strongly with success across all organisational levels.


The mainstream study and application of assessment of intellectual capability has not progressed much since the pioneering work of Wechsler and Galton. An examination of early books on human intellect (Board of Education, 1924; Vernon, 1938; Welton, 1891), suggests that many of the ideas and their applications are still relevant and current in the 21st Century. More specifically, the concept of IQ and the components of human intelligence developed a century ago are still prominent in the field of assessment of intellectual capability.

Test publishers have managed to package test items in far more appealing and modernised ways: they have significantly improved the presentation and language used, they have introduced far more interactive, IT-driven test items, and they may well use different terms instead of IQ (e.g., critical analysis and the like). Conceptually, however, there is not much difference between what early ability tests measured and what contemporary ability tests measure. The psychometric instruments used in the First World War to select British pilots seem applicable to current requirements.

This paper explores preliminary work using a different model of assessing human capability. The model uses the work of Elliot Jaques (1989) as a starting point, but progresses along different routes. It departs from the traditional model of the G factor and the content-specific sub-sets that distinguish between various forms of reasoning (e.g., verbal, numerical, abstract, or mechanical), and takes a more eclectic approach, examining the ability to understand and manage complexity. It shifts the emphasis from what can be called ‘Academic Intelligence’ (IQ) to the more business-oriented notion of ‘Executive Intelligence’ and the complexity of business decision making. It departs from the view that human capability develops gradually and suggests instead discontinuous and distinct steps in human capability.

More specifically, the proposed model suggests that there are 8 discontinuous and distinct layers of potential capability and that people develop by discontinuous periodic jumps (rather than linearly) from one complexity state to the next. The 8 layers are universal and cut across organisations, industries, and nations. The ability to handle complexity is not static; it matures with age in a predictable manner. For applied purposes, the level of work complexity should be in line with the person’s potential capability. There are 8 parallel levels of complexity in organisational roles that correspond with the complexity of mental processes (see Box #1).

BOX #1 The 8 levels of the Complexity Model:

  1. Retrieval Thinking – Best Practice. Operating in a structured methodical way, covering all the relevant information, and interpreting the information correctly.
  2. Affirmative Thinking – Pattern Recognition. Creating a potential logical explanation or solution, based on a series of independent pieces of data. It is about bridging gaps in information, by seeking linkages between independent pieces of information.
  3. Convergent Thinking – Critical Analysis. Examining and evaluating given hypotheses. Using systematic reasoning to identify a correct answer from a series of available options. It involves verification of hypotheses. It equates to what in everyday language is referred to as pure intellect.
  4. Divergent Thinking – Creativity. It involves the opening up or creation of new hypotheses, using induction – exploring the mass of information to identify possible trends and patterns – Thinking of many original, diverse, and elaborate ideas. Taking separate elements and blending them into something completely new and original. Combining previously unconnected ideas, information and elements to create something new.
  5. System Thinking – Innovation. It involves the application of creative ideas to the wider system and ensuring that these are viable, by exploring elements as a whole (holistically), including the various types of relationships between the many elements of a complex system. Turning new ideas into practical reality by understanding the system in which the creative input will be placed, and its immediate and long-term implications.
  6. Transformation Thinking – Reframing. It involves visioning a new future, and repositioning of systems of information in a new perspective. It gives a new life and meaning to well-established structures, arrangements, and systems, by turning these on their head or viewing them from a completely different perspective, that enable them to achieve future vision.
  7. Reconstructive Thinking – Re-forming. It involves dismantling complex settings and re-building as something quite different. It takes the reframed reality, breaks it to pieces and re-models it to be fit for purpose. It involves inducing and deducing global information systems to solution routes.
  8. New World Thinking – Revolution. The creation of a new body of knowledge that takes our current understanding of the world to a new level. It involves challenging the most fundamental building blocks of our reality, and replacing them with something rather different, new, and unexpected. It replaces old world with a new one, a world that is significantly different (never the same) from what was known before.

One of the key challenges that the complexity model poses to the traditional view of human intelligence is that the latter focuses only on Convergent Thinking: pure logic and systematic verification of hypotheses, narrowing down options to a single logical and correct answer. This is typified in the common way of assessing Convergent Thinking, i.e., seeking the single correct answer from 4 possible options (the multiple-choice method). The complexity model recognises the importance of Convergent Thinking, but views it as only level 3 on the 8-level model. Consequently, it suggests that Convergent Thinking will explain success and high performance in roles that rely heavily on it, i.e., technical-expert roles or lower management roles. For higher-level roles convergent thinking still matters, but other levels of thinking, or of handling complexity, become more important. The higher the role in the hierarchy, the less importance is placed on Convergent Thinking.

 A secondary challenge explores the effect of age. While traditional measures of intelligence are known to have a negative correlation with age; the complexity model explicitly suggests that the ability to handle complexity improves with age. Thus, the pure use of traditional measures of intelligence will show bias in favour of young people. Consequently, a hiring recommendation based on traditional measures of ability will favour young candidates. This seems counter-intuitive as it neglects the knowledge and experience that more mature candidates bring. The complexity model accounts for knowledge and experience. 


The aim of the study was to explore the validity of the complexity model and to contrast it with the validity of traditional measures of human intelligence in explaining success and high performance in a business context.

 Hypothesis #1: While traditional measures of human intelligence will explain success up to the technical-expert level, their explanatory power will decrease linearly from that point as a function of role complexity. Measures of complexity, on the other hand, will explain success across all hierarchical levels.

 Hypothesis #2: The predictive validity of traditional measures of intelligence is impaired by the effect of age; whereas the predictive validity of complexity measures is not negatively affected by age.


The study comprised 70 managers from 5 different organisations. It used a regression and correlation approach to explore and contrast the effect of traditional versus complexity measures of intelligence in explaining success and performance in a business environment.


Sample: The sample included 70 participants who took part in talent identification and management programmes. Participants were from 5 different organisations.

 Predictors: (a) 2 traditional measures of intellect (verbal and numerical): the Watson–Glaser Critical Thinking Appraisal and the Rust Advanced Numerical Reasoning Appraisal (RANRA). (b) 2 sets of measures of handling complexity: first, a complexity profile (scores on each of the 8 levels of the complexity model); second, a single aggregate score based on the 8 scores.

 Criterion: The hierarchical level of the participant, using the 8-level organisational role complexity scale.

 Control Variables: Age and the 28 personality measures derived from the Hogan instruments (HPI, MVPI, and HDS).


Table #1 displays the correlation matrix among the study variables. Further analysis controlled for the effect of age through partial correlations. The results suggested that the Complexity Score correlated strongly with the success criterion (r = .80, p < .0001). The traditional intelligence measures (verbal and numerical reasoning) were barely significant (r = .27 and .29), and fell just below the significance level when controlled for age (r = .23 and .24). As expected, the traditional intelligence measures correlated strongly with level 3 of the complexity model, namely ‘Convergent Thinking’ (r = .80 and .75). Their correlations with higher thinking levels dropped in a relatively linear fashion.

A stepwise regression analysis explained 68% of the criterion’s variance, R² =.678, F(2,69) = 69.9, p < .0001. Only 2 predictors entered the regression equation, namely – Complexity Score (b = .52) and Level 5: System Thinking (b = .31).
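For readers who want to reproduce the age-controlled figures, a first-order partial correlation can be computed directly from the three bivariate correlations. A minimal sketch (the numeric check below uses round illustrative values, not the study data):

```python
import math

def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of x and y, controlling for z."""
    return (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz**2) * (1 - r_yz**2))

# If the control variable is uncorrelated with both x and y,
# partialling it out changes nothing:
print(partial_corr(0.50, 0.0, 0.0))  # → 0.5
```

When the control variable (here, age) correlates with both measures, the adjusted coefficient shrinks, which is the pattern seen for the verbal and numerical scores in Table #1.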

 TABLE #1: Correlations and Partial Correlations Matrix for key variables (n = 70)

  Bivariate   Partial: control for age
  1 2 3 4   1 2 3 4
1. Role Level                  
2. Verbal Reasoning .27         .23      
3. Numerical Reasoning .29 .87       .24 .85    
4. Complexity Score .80 .29 .26     .80 .27 .24  
Complexity Levels                  
5a. #1 Retrieval Thinking .29 .44 .42 .23   .26 .40 .37 .21
5b. #2 Affirmative Thinking .50 .59 .61 .52   .28 .56 .59 .51
5c. #3 Convergent Thinking .41 .80 .75 .36   .36 .78 .70 .35
5d. #4 Divergent Thinking .66 .30 .24 .84   .68 .33 .27 .85
5e. #5 System Thinking .76 .29 .32 .81   .76 .25 .27 .81
5f. #6 Transformation Thinking .58 .19 .21 .73   .58 .18 .21 .73
5g. #7 Reconstructive Thinking .49 .18 .19 .63   .49 .18 .20 .64
5h. #8 New World Thinking *  

r ≥ .25, p ≤ .05; r ≥ .30, p ≤ .01; r ≥ .38, p ≤ .001

* No variance was found for Level 8 Thinking (New World Thinking)
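The significance thresholds under the table follow from the standard t-test for a Pearson correlation, t = r·√(n−2)/√(1−r²), with n−2 degrees of freedom. A quick sketch confirming that with n = 70 an r of .25 clears the two-tailed .05 critical value (about 1.995 for 68 df, taken from standard t tables):

```python
import math

def t_for_r(r, n):
    """t statistic for testing a Pearson correlation against zero."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r**2)

T_CRIT_05_DF68 = 1.995      # two-tailed .05 critical value, 68 df (from t tables)
print(t_for_r(0.25, 70))    # ≈ 2.13, above the critical value
print(t_for_r(0.20, 70))    # ≈ 1.68, below the critical value
```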


The findings demonstrate that while traditional measures of intelligence are repeatedly quoted as the single best predictor of performance, their effect is limited to certain hierarchical levels within organisations. The complexity model seems to provide a better framework for explaining high performance and success, as it is designed to cover the full range of hierarchical levels.


  • Board of Education (1924). Psychological Tests of Educable Capacity. H.M. Stationery Office, London.
  • Jaques, E. (1989). Requisite Organisation. Cason Hall, Arlington, VA.
  • Vernon, P. E. (1938). Assessment of Psychological Qualities by Verbal Methods. H.M. Stationery Office, London.
  • Welton, J. (1891). A Manual of Logic. University Correspondence College Press Warehouse, Strand, W.C.


Uncommon Sense

by Dr Tuvia Melamed on Jan.25, 2010, under Articles

Departure from Best Practice



 This paper marks a departure from contemporary thinking in employee resourcing. It rethinks the approach used for selecting employees, questions common sense, and dares to offer an approach that goes against perceived best practice. At its heart are four radical elements (see Table #1).

TABLE #1: Uncommon Sense Principles in Resourcing

  Sifting Methodology
    Common Sense: Reduce a high volume of applicants to a manageable number
    Uncommon Sense: Keep as many candidates as possible in the system for as long as possible
  Success Criterion
    Common Sense: Use a competency framework; mark candidates against an OVERALL score across all competencies
    Uncommon Sense: A variety of holistic profiles associated with either success or failure; fit is assessed against each profile (not competencies)
  Assessment Methodology
    Common Sense: Candidates' performance in a series of independent assessment activities is assessed by independent assessors against specific competencies
    Uncommon Sense: One assessor observes a group of candidates performing an extensive, long, and highly diverse activity
  Decision Making
    Common Sense: Fill vacancies with successful candidates
    Uncommon Sense: Identify a small number (about 3) of key skills profiles in teams and recompose existing teams to have a balance of these key profiles

We believe that organisations can benefit greatly from a more creative and thoughtful approach to resourcing, and we want to encourage our clients to think differently about their resourcing strategies. Below we suggest just four ways in which a more creative approach, or what we call an Uncommon Sense approach, may benefit you and your organisation when recruiting people.

Uncommon Sense #1 – Keep as many candidates in the system for as long as possible

Common sourcing processes follow three key stages. They first identify potential candidates (e.g., recruitment advertising, databases), then proceed through sifting, and end up with some form of assessment (e.g., interviews, assessment centres). This approach requires attracting twenty to thirty applicants to fill a single vacancy.

Consider a common scenario. A company seeks to fill a single senior-manager vacancy. It advertises in a national broadsheet and receives about sixty applications. The recruitment sponsor goes through the CVs and uses criteria such as experience, declared skills, declared achievements, qualifications, personal style, and overall impressions to reduce the number from sixty to a more manageable six. The short-listed candidates are invited to attend an interview or an assessment centre. Assuming that several applicants from the original pool of sixty are potentially good senior managers, the success of the whole recruitment exercise depends on the initial sifting that led to the short list. From reading the CVs it becomes quite clear that, with the exception of the few people who do not meet the basic selection criteria, the majority of applicants’ declared skills, achievements, and levels of experience are all rather similar. Typically, 10% are clearly not suitable, 20% have some of the required skills but are light on other role requirements, and a further 10% are what we call ‘wild cards’: they may be light in some areas, but compensate with unique and unmatchable experience in others. The remaining 60% all look good on paper. Thus, our recruitment sponsor has the task of short-listing six applicants out of a pool of about forty candidates. Their decision is likely to be biased by personal style, unquantified overall impressions, and unjustifiable peripheral clues (e.g., “the CV is far too long”, “the layout is not neat”, “I don’t think much of Open University degrees”, etc.).

Let us stretch the proposed scenario a bit further. What if the advertising campaign was a success and generated 120 applications? The same short-listing process would be applied. The common sense principle of ‘reducing applications to a manageable number’ would mean a relatively random selection among candidates who look good on paper. It also means rejecting, quite early on, candidates who are potentially good senior managers. There is no guarantee that those short-listed are the best candidates. All of a sudden, the obvious common sense seems flawed, and the uncommon sense of ‘keep as many in the system, for as long as possible’ starts to make sense.

Let me take you through a slightly different scenario. A company wishes to start a new 350-person call centre. The industry-standard ratio for call centres is 30:1 (applicants to appointees). This ratio suggests that to get 350 quality employees the organisation would need to attract over 10,000 applications. But where are we going to find 10,000 people in an over-saturated market like call centres? In despair, the organisation appoints a headhunter (or, more likely, a body shop) who guarantees them 350 ‘bums on seats’, plus a continuous influx of people to replace those who leave. Body shops are about volume; the only way to get the numbers in is to put forward any person who meets the most basic criteria (e.g., no criminal record, willing to work shifts). No wonder average turnover in UK call centres runs at up to 80% per year!

By applying the Uncommon Sense approach we discovered far more suitable candidates by looking harder, and more broadly, at each person. As a result, to recruit around 350 call centre operators we actually needed to attract only some 2,350 applicants, not 10,000 (see Table #2). In addition, a year later annual turnover was measured at only 8%.

TABLE #2: Recruitment Ratios

  Stage | Candidates | % of previous stage | % of campaign
  Respond to campaign | 2,346 | 100% | 100%
  PASS 1st telephone interview | 1,760 | 75% | 75%
  Psychometrics & 2nd telephone interview | 1,672 | 95% | 71%
  PASS 2nd telephone interview | 1,115 | 66% | 48%
  Attend assessment event | 1,059 | 95% | 45%
  PASS assessment event | 352 | 33% | 16%
  Accept appointment offer | 348 | 98% | 15%
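The stage-by-stage percentages in Table #2 can be recomputed from the raw counts. The helper below is a generic sketch using the published counts; small rounding differences from the printed table are expected.

```python
def funnel(stages):
    """stages: list of (label, count); returns (label, count, % of previous, % of first)."""
    first = stages[0][1]
    rows, prev = [], first
    for label, count in stages:
        rows.append((label, count, round(100 * count / prev), round(100 * count / first)))
        prev = count
    return rows

stages = [("Respond to campaign", 2346),
          ("PASS 1st telephone interview", 1760),
          ("Psychometrics & 2nd telephone interview", 1672),
          ("PASS 2nd telephone interview", 1115),
          ("Attend assessment event", 1059),
          ("PASS assessment event", 352),
          ("Accept appointment offer", 348)]
for row in funnel(stages):
    print(row)
```

The overall yield of 348 appointees from 2,346 respondents is a ratio of roughly 7:1, against the industry standard of 30:1 quoted earlier.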

Uncommon Sense #2 – Don’t use one agreed competency framework to recruit people

In the real world, different people with different attitudes, behaviours, and values can all be successful in the same role. However, one of the most ingrained and widespread assumptions about talent is the need to believe that a single list of generic qualities can describe all high-performing employees. After all, it is much more manageable to search for only one set of attributes than to contend with the possibility that people with quite different sets of attributes might be equally effective. This assumption is evident in the application of a competency framework: organisations search for employees who meet all the competencies identified in their framework. This means that those appointed are relatively similar, as by definition they should score highly on all the competencies in the framework.

Over the last two decades, most organisations have developed competency models with between six and a dozen competencies. The key assumption is that candidates must be assessed as adequate against all of them. Consider a common scenario: a company specifies that the ideal candidate needs to be structured, methodical, detail-oriented, and logical, as well as flexible, adaptable, creative, a strategic thinker, able to think on their feet, and responsive to ambiguity. In the real world, however, it is very hard to find a person with both extremes (apart from truly gifted individuals). Consequently, the company appoints someone who is a bit structured and organised and a bit flexible and adaptable, justifying it with the all-time winning statement: ‘This is a well-rounded candidate.’

I view it differently. The appointed candidate is neither structured nor adaptable. I propose that success requires being either very structured OR very flexible; a bit of this and a bit of that is what I would call a ‘B’ player: an OK performer, but nothing exceptional, a relatively ‘bland’ individual who can do the role but not excel in it.

The uncommon sense approach focuses on multiple combinations of qualities, something I term ‘Success Profiles’. At its core is detailed profiling of what success looks like for a particular role in a particular organisation. The outcome is a series of distinct ‘A player’ profiles that are linked to a multiplicity of success criteria (high performance, retention, organisational fit, and employee satisfaction). The approach seeks to identify individuals who match one of several possible success profiles. Such individuals do not typically fare well in traditional assessment methods because, although they score extremely highly on some competencies, they typically have gaps in other areas; they are less well rounded overall.

Naturally, the profiling activity also identifies an independent series of profiles that are linked to failure (though these are not necessarily the opposite of the success profiles).

The following are examples of two success profiles and two failure profiles identified for an organisation within the air transport industry:

  • HARRIER: Does not need a long runway to lift itself off the ground. Takes a lot of energy to rise and get the perspective from above ground, but once airborne it can move very fast and with a great degree of agility. A strategic thinker, non-impulsive, yet decisive once enough information is gathered. An agile individual who can operate in all task- and people-related environments.
  • HERCULES: May appear slow and cumbersome, but has enormous capacity to absorb and carry everything thrown at it. Takes on enormous responsibility, supports everyone, and steps in to help. When airborne, carries the whole team. A dedicated and reliable ‘work-horse’ that does everything required, and much more. May not be elegant, but compensates through sheer dedication and outstanding productivity. The backbone of the team: practical, solid, and dependable within set operational parameters.
  • HOT-AIR BALLOON: Has no controls or steering to determine direction. Highly dependent on external (weather) conditions. Moves slowly, but once airborne is colourful, a lot of fun, and very impressive. Yet if you look inside the colourful balloon, there is nothing but hot air. It really doesn’t take much to pop the balloon and cause it to drop from the sky. A fun-loving, entertaining, but lightweight individual: all front, no core or essence.
  • TORNADO: Fast, powerful, and decisive, but requires a lot of support from ground staff to stay in shape and be able to operate (high maintenance). Arrogantly powers ahead towards one identified solution, ignoring others’ views and alternative approaches. Once the button is pressed to release a missile, there is no way back. Impulsive, does not reflect, and can be a liability when dealing with delicate issues.

Uncommon Sense #3 – It is economical to use large-scale assessment centres

The assessment centre is a common methodology for selecting the successful candidates from a short list. Typically, a small number of candidates (about 6 per centre) complete a variety of exercises. Each exercise is linked to a small number of competencies, and each is observed by a different assessor. The final decision is based on consensus among the various assessors, and reflects the performance of the candidate across a whole raft of activities.

This all makes sense. Yet, this common sense approach is costly, as it requires on average a ratio of two assessors to three candidates. In our call centre example, where more than 1,000 candidates need to be assessed, the notion of an Assessment Centre appears exceptionally expensive, inefficient, and painfully slow. No wonder that many large recruitment campaigns opt for a body-shop solution – “Let the recruiters identify suitable people. We don’t need to assess, simply hold a brief interview with each proposed person”. 

The Uncommon Sense approach is very different. It uses an engaging, half-day, single-activity event for a large volume of candidates (catering for over fifty candidates at a time). Candidates work in small teams that collaborate and compete, subjected to cleverly tailored interventions that test the identified success and failure profiles. The well-established assessment centre ratio of 2:3 is slashed here to one assessor per six candidates. Now that makes financial sense. The uncommon sense becomes even more attractive when you consider that the same assessment team can run two groups per day, enabling the assessment of over a hundred candidates per day, and a thousand candidates over ten working days. Common sense argues that the assessment centre should be based on a work simulation as close to the real environment as possible. However, such an approach has been shown (Justin Menkes: Executive Intelligence) to be a better test of previous job knowledge than of ability.
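The cost comparison can be made concrete with some back-of-the-envelope arithmetic. The sketch below uses the throughput figures given here (6 candidates per traditional centre at a 2:3 assessor ratio; 50 candidates per half-day event at 1:6, with two events per day); the assumption that a traditional centre occupies its assessors for a full day is ours, for illustration only:

```python
import math

CANDIDATES = 1000  # the call centre campaign from the example

# Traditional assessment centre: ~6 candidates per centre,
# assessor:candidate ratio of 2:3, i.e. 4 assessors per centre.
# Assumption (not stated in the text): one centre runs per day.
centre_days = math.ceil(CANDIDATES / 6)        # 167 centre-days
traditional_assessor_days = centre_days * 4    # 668 assessor-days

# Uncommon Sense event: 50 candidates per half-day event,
# 1 assessor per 6 candidates, same team runs two events a day.
assessors = math.ceil(50 / 6)                  # 9 assessors
days = math.ceil(CANDIDATES / (2 * 50))        # 10 working days
uncommon_assessor_days = assessors * days      # 90 assessor-days

print(traditional_assessor_days, uncommon_assessor_days)
```

Under these assumptions the large-scale event needs roughly one-seventh of the assessor time, which is the financial case the text is making.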

An uncommon sense approach is the Art Event, a creative activity for a large group of candidates working in small teams. The large group has to create a drawing on a massive scale, while each of the small teams makes a specific contribution towards the overall group outcome (see Picture #1, a 15-panel drawing measuring 3.3m x 2.8m). The event is staged to include tailor-made, realistic interventions that test the specific success and failure profiles. Picture #2 shows a team working on the picture; Picture #3 displays the final outcome (a different picture) – note the importance of teams collaborating to ensure that the different panels connect well.

Uncommon Sense #4 – Don’t appoint “A Team” players

Once assessed, it makes sense to appoint the strongest candidates. But this common sense has a built-in flaw: it demolishes diversity within the organisation and diminishes its potential to tackle future challenges. We end up with a one-dimensional organisation, with a clearly defined but rather narrow set of skills, and limited capacity to develop complementary skills in response to changing demands.

Football teams provide a good analogy. A football team has three key roles – Defender, Midfielder, and Forward – and any team must have a good balance of players in each role. A team of only Forwards, regardless of the fact that they are all world-class (‘A’ players), is imbalanced and ineffective.

The same applies to work. A team cannot be effective with only one type of success profile. The number of success and failure profiles will vary, but for the purpose of team configuration it is useful to identify a small number of core success profiles and ensure that each team has a relatively balanced spread of the different profiles.

Uncommon sense advocates assessing all team members and classifying each into one of the key success profiles; then examining the configuration of each team to ensure it has a good balance of the different profiles, or at minimum a representation of all of them. Hence, when recruiting new members, the team they will fit into should be considered. Table #3 offers a hypothetical example of three 9-member teams with varied distributions of three success profiles (X, Y, & Z) and the actions required to balance these with minimal disruption.

An uncommon sense approach, we believe, can lead to faster, cheaper and more robust results for all resourcing strategies. An uncommon approach can therefore lead to uncommonly good results.

TABLE #3: Teams’ reconfiguration
  Team A
    Before: X X X | X X X | X X X
    Action: Transfer 3 ‘X’ profiles to Team C; receive 1 ‘Z’ profile and 2 ‘Y’ profiles in return
    After:  X X X | X X X | Z Y Y
  Team B
    Before: X X X | Z Z X | Y Y Y
    Action: No change
    After:  X X X | Z Z X | Y Y Y
  Team C
    Before: X Y Y | Y Y Y | Z Z Z
    Action: Exchange with Team A: 2 ‘Y’ profiles and 1 ‘Z’ profile for 3 ‘X’ profiles
    After:  X X X | Y Y Y | X Z Z
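The reconfiguration in Table #3 can be framed as a simple balancing exercise: a team short of a profile receives one from the team holding the most of it, giving back its own most abundant profile so team sizes stay constant. The greedy swap rule below is our own illustrative assumption, not part of the original method; the team data mirrors Table #3:

```python
from collections import Counter

# Team compositions from Table #3 (9 members each, profiles X/Y/Z)
teams = {
    "A": Counter({"X": 9}),
    "B": Counter({"X": 4, "Z": 2, "Y": 3}),
    "C": Counter({"X": 1, "Y": 5, "Z": 3}),
}

def rebalance(teams, profiles=("X", "Y", "Z"), floor=2):
    """Greedy sketch: while some team lacks a profile (fewer than
    `floor` members with it), transfer one from the team holding the
    most of it, swapping back the receiver's most abundant profile."""
    changed = True
    while changed:
        changed = False
        for p in profiles:
            needy = min(teams, key=lambda t: teams[t][p])
            rich = max(teams, key=lambda t: teams[t][p])
            if needy != rich and teams[needy][p] < floor and teams[rich][p] > floor:
                give_back = max(teams[needy], key=lambda q: teams[needy][q])
                teams[rich][p] -= 1; teams[needy][p] += 1
                teams[needy][give_back] -= 1; teams[rich][give_back] += 1
                changed = True
    return teams

result = rebalance(teams)
for name, team in result.items():
    print(name, dict(team))
```

With this data the sketch arrives at the same end state as Table #3: Team A ends with 6 X, 2 Y and 1 Z, Team B is untouched, and Team C ends with 4 X, 3 Y and 2 Z, so every team at least represents every profile.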

Validation Study

The data for this study was drawn from a series of over 20 recruitment campaigns using the Uncommon Sense approach. About 45% of the candidates short-listed across the various campaigns were rated ‘A-Players’ on the Uncommon Sense process, about a third were rated ‘B-Players’, and about 22% were rated ‘C-Players’. Candidates’ success was measured on a 6-point scale based on Board Interview (client) and assessment centre findings: 5 = Excellent; 4 = Strong; 3 = Above the line; 2 = Below the line; 1 = Poor; 0 = Disaster. Table #4 contrasts the initial stages of the campaign with the final Assessment Centre stage.

TABLE #4: Validation Findings
Candidates’ Scores
              5      4      3      2      1      0     Total
  A Player   25%    58%    17%     –      –      –     100%
  B Player    –     12%    44%    44%     –      –     100%
  C Player    –      –      –     50%    33%    17%    100%


  • Previous campaigns – one in six short-listed candidates was appointable (17%)
  • This campaign – about 75% of the short-listed candidates were deemed appointable
  • Counting only the recommended candidates (‘A’ players), the figure of appointable candidates reaches a staggering 93%.
