Competence vs Competency - What's the difference?
The concept of competence remains one of the most diffuse terms in the organisational and occupational literature (Nordhaug and Gronhaug, 1994). Exactly what does an author mean when using any of the terms of competence?
The concept of individual competence is widely used in human resource management (Boyatzis, 1982, Schroder, 1989, Burgoyne, 1993). This refers to a set of skills that an individual must possess in order to be capable of satisfactorily performing a specified job. Although the concept is well developed, there is continuing debate about its precise meaning.
Others take a job-based view of competence which, according to Robotham and Jubb (1996), can be applied to any type of business: the competence-based system rests on identifying a list of key activities (McAuley, 1994) and behaviours identified through observing managers in the course of doing their jobs.
A useful view is to take competence to mean a skill and the standard of performance reached, whilst competency refers to the behaviour by which it is achieved (Rowe, 1995). That is, competence describes what people do and competency describes how people do it.
Rowe (1995, p16) further distinguishes the attributes an individual exhibits into “morally based” behaviours – important drivers of behaviour but especially difficult to measure – and “intellectually based” behaviours, described as capabilities or competencies. Capabilities are distinguished in that they refer to developmental behaviours – i.e. they are graded to identify development areas for improving how people undertake particular tasks.
Young (2002) develops a similar theme, building on Sarawano’s (1993) model linking competency and competence to performance. He identifies competency as a personal characteristic (motives, traits, image/role and knowledge) together with how the individual behaves (skill). Competence is what a manager is required to do – the job activities (functions, tasks). These in turn lead to the performance of the individual [manager].
Jacobs (1989) considers a distinction between hard and soft competences. Soft competences refer to such items as creativity and sensitivity, and comprise more of the personal qualities that lie behind behaviour. These are viewed as conceptually different from hard competences, such as the ability to be well organised. Jacobs’ distinction fits neatly into Young’s model, with hard competences referring to identifiable behaviours and soft competences to the personal characteristics of the individual.
Further distinctions relate to the usefulness of measuring competences and competencies.
Cockerill et al. (1995) define threshold and high-performance competences. Threshold competences are units of behaviour which are used by job holders, but which are not considered to be associated with superior performance. They can be thought of as defining the minimum requirements of a job. High performance competences, in contrast, are behaviours that are associated with individuals who perform their jobs at a superior level.
In the UK, the Constable and McCormick Report (1987) suggested that the skill base within UK organisations could no longer keep pace with the then developing business climate. In response, the Management Charter Initiative (MCI) sought to create a standard model in which competence is recognised in the form of job-specific outcomes. Thus, competence is judged on the performance of an individual in a specific job role. The competences required in each job role are defined by means of a functional analysis – a top-down process resulting in four levels of description:
Units of competence
Elements of competence
Performance criteria
Range statements

Units are broken down into elements. Elements, in turn, are broken down into performance criteria, which describe the characteristics of competent performance, and range statements, which specify the range of situations or contexts in which the competence should be displayed.
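To make the top-down structure concrete, here is a minimal sketch in Python of how such a functional analysis might be represented. The class names and the example entries are illustrative assumptions of mine, not part of the MCI standards themselves.

```python
from dataclasses import dataclass, field

@dataclass
class Element:
    """An element of competence within a unit."""
    name: str
    performance_criteria: list = field(default_factory=list)  # what competent performance looks like
    range_statements: list = field(default_factory=list)      # contexts in which it must be displayed

@dataclass
class Unit:
    """A unit of competence for a specific job role."""
    name: str
    elements: list = field(default_factory=list)

# Hypothetical example content, for illustration only
unit = Unit(
    name="Manage information",
    elements=[
        Element(
            name="Lead meetings",
            performance_criteria=["Objectives are stated at the outset"],
            range_statements=["Formal team meetings", "One-to-one briefings"],
        )
    ],
)

print(unit.elements[0].name)  # -> Lead meetings
```

The point of the sketch is simply that each level nests inside the one above, matching the top-down character of functional analysis.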
The MCI model now includes personal competence, which was missing in the original, addressing some of the criticisms levelled at the MCI standards. Even so, the model tends to ignore the personal behaviours that may underpin some performance characteristics, particularly in management, where recent work has indicated the importance of behavioural characteristics such as self-confidence, sensitivity, proactivity and stamina.
The US approach to management competence, on the other hand, has focused heavily on behaviours. Boyatzis (1982) identifies a number of behaviours useful for specifying behavioural competence. Schroder (1989) also offers insights into the personal competencies which contribute to effective professional performance.
Personal competencies and their identifying behaviours form the backbone of many company-specific competency frameworks and are used extensively in assessment centres for selection purposes.
This is because behavioural (or personal) competence may be a better predictor of capability – i.e. the potential to perform in future posts – than functional competence, which attests to competence in the current post. The main weakness of the personal competence approach, according to Cheetham and Chivers (1996), is that it does not define or assure effective performance within the job role in terms of the outcomes achieved.
In his seminal work “The Reflective Practitioner”, Schon (1983) attempts to define the nature of professional practice. He challenges the orthodoxy of technical rationality – the belief that professionals solve problems by simply applying specialist or scientific knowledge. Instead, Schon offers a new epistemology of professional practice of ‘knowing-in-action’ – a form of acquired tacit knowledge – and ‘reflection’ – the ability to learn through and within practice. Schon argues that reflection (both reflection in action and reflection about action) is vital to the process professionals go through in reframing and resolving day-to-day problems that are not answered by the simple application of scientific or technical principles.
Schon (1983) does not offer a comprehensive model of professional competence; rather, he argues that the primary competence of any professional is the ability to reflect – this being key to acquiring all other competencies in the cycle of continuous improvement.
There are criticisms of competency-based approaches to management and these tend to argue that managerial tasks are very special in nature, making it impossible to capture and define the required competences or competencies (Wille, 1989). Other writers argue that management skills and competences are too complex and varied to define (Hirsh, 1989, Canning, 1990) and it is an exercise in futility to try and capture them in a mechanistic, reductionist way (Collin, 1989). Burgoyne (1988) suggests that the competence-based approach places too much emphasis on the individual and neglects the importance of organisational development in making management development effective. It has also been argued that generic lists of managerial competences cannot be applied across the diversity of organisations (Burgoyne, 1989b, Canning, 1990).
Linking competency models to organisation outcomes
Some writers have identified competencies that are considered to be generic and overarching across all occupations. Reynolds and Snell (1988) identify ‘meta-qualities’ – creativity, mental agility and balanced learning skill – that they believe reinforce other qualities. Hall (1986) uses the term ‘meta-skills’ – skills in acquiring other skills. Linstead (1991) and Nordhaug and Gronhaug (1994) use the term ‘meta-competencies’ to describe similar characteristics. The concept of meta-competence falls short of providing a holistic, workable model, but it does suggest that there are certain key competencies that overarch a whole range of others.
There is, however, some doubt about the practicability of breaking down the entity of management into its constituent behaviours (Burgoyne, 1989a). This suggests that the practice of management may be an activity best considered from a holistic viewpoint.
Baker et al. (1997) link the various types of competence by first establishing a hierarchy of congruence as a backbone to the model. In broad terms, they describe the congruence of an entity to be the degree of match or fit between some external driver to the entity and the response of that entity to the driver. This method enables them to take into consideration the idea that management, as an entity, and the individuals who perform the function do so within a particular environment. Measurement of congruence, or goodness of fit, has been attempted in studies of operations (Cleveland et al., 1989, Vickery, 1991). Baker et al.’s hierarchy is shown in the Figure below, with four levels of congruence: 1) organisation level, 2) core business process level, 3) sub-process within core process level, and 4) individual level.
At the organisation level, there is congruence when a firm adopts a strategy that is consistent with the competitive priorities derived from the firm’s business environment. The strategy, in turn, determines the operational priorities of the firm. Following Platts and Gregory (1990), though using their own terminology, Baker et al. (1997) consider these operational priorities to drive the core processes of the firm. These, in turn, can be broken down into a number of sub-processes – and congruence is needed between the sub-processes and the core processes. At the individual level, the skills and knowledge should also match the priorities driven by the sub-processes.
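As a sketch only – Baker et al. do not prescribe a scoring rule, so the measure below is an assumption of mine – congruence at any one level can be thought of as the overlap between the priorities a driver sets and the priorities the responding entity actually pursues:

```python
def congruence(driver_priorities, response_priorities):
    """Toy measure of fit: shared priorities over total priorities (a Jaccard index)."""
    driver, response = set(driver_priorities), set(response_priorities)
    if not driver | response:
        return 1.0  # nothing demanded and nothing done: trivially congruent
    return len(driver & response) / len(driver | response)

# Hypothetical example: strategy-level priorities vs. a core process's priorities
strategy = ["delivery speed", "quality", "cost"]
core_process = ["delivery speed", "quality", "flexibility"]

print(congruence(strategy, core_process))  # 2 shared out of 4 total -> 0.5
```

The same toy function could be applied at each step of the hierarchy – strategy to core process, core process to sub-process, sub-process to individual skills – which is the sense in which congruence "cascades" down the four levels.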
This hierarchical model follows the traditional view that structure follows strategy (Vickery, 1991, Cleveland et al., 1989, Kim and Arnold, 1992). Others hold that competences are part of the structure of the firm and should influence strategy making; Bhattacharaya and Gibbons (1996) point out that Prahalad and Hamel (1990) and Stalk et al. (1992) take this approach.
The hierarchical model was tested by analysing case studies of seventeen UK manufacturing plants that won Cranfield Best Factory Awards during the period 1993-95, and establishing benchmarks. Baker et al. (1997) found some direct cause-effect links between enabling competences at the sub-process level and competitive performance (at the core process level). However, they also found many ‘best practices’, such as employee empowerment and team working, which were harder to link to specific competitive competences.
This model provides an insightful way to break down the complex issue of how individual performance influences the competitive competences of the firm. However, Baker et al.’s research is limited to the manufacturing sector, where core processes are often easier to identify and define, with a clear delineation of individual effort, technology and product. It also rests on the assumption that structure follows strategy – whereas most firms will already have a structure and will be adapting their strategies continuously as the external environment changes.
Cheetham and Chivers (1996) describe a model of competence that draws together the apparently disparate views of competence - the ‘outcomes’ approach and the ‘reflective practitioner’ (Schon, 1983, Schon, 1987) approach.
Their focus was to determine how professionals maintain and develop their professionalism. In drawing together their model, they consider the key influences of different approaches and writers. The core components of the model are: knowledge/cognitive competence, functional competence, personal or behavioural competence, and values/ethical competence, with overarching meta-competencies including communication, self-development, creativity, analysis and problem-solving. Reflection in and about action (Schon, 1983) surrounds the model, thereby bringing the outcomes and reflective practitioner approaches together in one model, shown in the Figure below.
Cheetham and Chivers’ model of professional competence is useful in bringing the concept of individual competence to bear on the competence of the organisation in a non-manufacturing context, but it still falls short of providing a useful model to link an individual’s behaviour with the business results of an organisation across industries – a generic model, if you will.
Young (2002) creates such a generic model neatly by developing his individual model further to the organisational perspective, adopting the concept of core competence as articulated by Prahalad and Hamel (1990) and further developed by Stalk et al. (1992) and Tampoe (1994). He suggests that the collection of individual competences within the organisation creates the organisational core competence.
This model provides a way to understand how developing competency (personal characteristics and behaviours) at the individual level enables an individual to demonstrate competence (the functions and tasks of the job) which in turn cascades through a hierarchy of the organisation (core competence and other activities supporting the organisation) to deliver business results.
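The cascade can be sketched as a chain of simple functions. The stage names follow the text, but the tasks, numbers and the aggregation rule below are hypothetical choices of mine, purely to illustrate the direction of flow from individual competency to organisational results.

```python
def demonstrate_competence(competency):
    """Personal characteristics and behaviours (competency) enable the
    functions and tasks of the job (competence)."""
    # Hypothetical: skill level carries through to each job task
    return {task: competency["skill"] for task in ["plan", "delegate", "review"]}

def organisational_core_competence(individual_competences):
    """The collection of individual competences creates the core competence,
    here crudely aggregated as an average across all tasks and people."""
    all_tasks = [level for comp in individual_competences for level in comp.values()]
    return sum(all_tasks) / len(all_tasks)

# Hypothetical two-person organisation
team = [{"skill": 0.8}, {"skill": 0.6}]
competences = [demonstrate_competence(member) for member in team]
print(round(organisational_core_competence(competences), 2))  # -> 0.7
```

The sketch is deliberately crude: the real claim of the model is qualitative – that developing competency at the individual level cascades upward – not that business results are a literal average of skill scores.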
ALESSI, S. M. (1988) Fidelity in the Design of Instructional Simulations. Journal of Computer-Based Instruction, 15, 40-47.
ALLIGER, G. M., TANNENBAUM, S., BENNET, W., TRAVER, H. & SHOTLAND, A. (1997) A Meta-Analysis of the Relations among Training Criteria. Personnel Psychology, 50, 341-358.
ANDERSON, J. R. (1982) Acquisition of Cognitive Skills. Psychological Review, 89, 369-406.
ANDERSON, P. H. & LAWTON, L. (1997) Demonstrating the Learning Effectiveness of Simulations: Where We are and Where We Need to Go. Developments in Business Simulation & Experiential Exercises, 24, 68-73.
BAILEY, J. & WITMER, B. (1994) Proceedings of Human Factors & Ergonomics Society.
BEDINGHAM, K. (1997) Proving the Effectiveness of Training. Industrial and Commercial Training, 29, 88-91.
BRINKERHOFF, R. O. (1988) An Integrated Evaluation Model for HRD. Training & Development, 42, 66-68.
BRINKERHOFF, R. O. (1989) Achieving Results from Training, San Francisco, Jossey-Bass.
BURGOYNE, J. & COOPER, C. L. (1975) Evaluation Methodology. Journal of Occupational Psychology, 48, 53-62.
BURGOYNE, J. & SINGH, R. (1977) Evaluation of Training and Education: Micro and Macro Perspectives. Journal of European Industrial Training, 1, 17-21.
BUTLER, R. J., MARKULIS, P. M. & STRANG, D. R. (1988) Where Are We? An Analysis of the Methods and Focus of the Research on Simulation Gaming. Simulation & Games, 19, 3-26.
CAMPBELL, D. T. & STANLEY, J. C. (1966) Experimental and Quasi-Experimental Design for Research, Chicago, Rand-McNally.
CAMPBELL, J. P., DUNNETTE, M. D., LAWLER, E. E. & WEICK, K. E. (1970) Managerial Behaviour, Performance and Effectiveness, Maidenhead, McGraw-Hill.
CLARK, R. & CRAIG, T. (1992) Research and Theory on Multi-Media Learning Effects. IN GIARDINA, M. (Ed.) Interactive Learning Environments; Human Factors and Technical Consideration on Design Issues. Berlin, Springer-Verlag.
COLLINS, D. B. (In Press) Performance-Level Evaluation Methods Used in Management Development Studies from 1986-2000. Human Resource Development Quarterly.
DEDE, C. (1997) The Evolution of Constructivist Learning Environments. Educational Technology, 52, 54-60.
DRUCKMAN, D. (1995) The Educational Effectiveness of Interactive Games. IN CROOKALL, D. & ARAI, K. (Eds.) Simulation and Gaming Across Disciplines and Cultures: ISAGA at a Watershed. Thousand Oaks, CA., Sage.
DRUCKMAN, D. & BJORK, A. (1994) Learning, Remembering, Believing: Enhancing Human Performance, Washington DC, National Academy Press.
EASTERBY-SMITH, M. (1980) The Evaluation of Management and Development: an Overview. Personnel Review, 10, 28-36.
EASTERBY-SMITH, M. (1994) Evaluating Management Development, Training and Education, Aldershot, Gower.
EASTERBY-SMITH, M. & ASHTON, D. J. L. (1975) Using Repertory Grid Technique to Evaluate Management Training. Personnel Review, 4, 15-21.
EASTERBY-SMITH, M., THORPE, R. & LOWE, A. (1991) Management Research: An Introduction, London, Sage.
FEINSTEIN, A. H. & CANNON, H. M. (2002) Constructs of Simulation Evaluation. Simulation and Gaming, 33, 425-440.
FILSTEAD, W. J. (1979) Qualitative Methods: A Needed Perspective in Evaluation Research. IN COOK, T. D. & REICHARDT, C. S. (Eds.) Qualitative and Quantitative Methods in Evaluation Research. Beverly Hills, Sage.
GAGNE, R. M. (1984) Learning outcomes and their effects: Useful categories of human performance. American Psychologist, 39, 377-385.
GOPHER, D., WEIL, M. & BAREKET, T. (1994) Transfer of skill from a computer game trainer to flight. Human Factors, 36, 387-405.
GREDLER, M. E. (1996) Educational Games and Simulations: A Technology in search of a (Research) Paradigm. IN JONASSEN, D. H. (Ed.) Handbook of Research for Educational Communications and Technology. New York, Simon & Schuster Macmillan.
GREENO, J., SMITH, D. & MOORE, J. (Eds.) (1993) Transfer on Trial: Intelligence, cognition and instruction, Norwood NJ, Ablex.
GUBA, E. G. & LINCOLN, Y. S. (1989) Fourth Generation Evaluation, London, Sage.
HAMBLIN, A. C. (1974) Evaluation and Control of Training, Maidenhead, McGraw Hill.
HAYS, R. T. & SINGER, M. J. (1989) Simulation fidelity in training systems design: Bridging the gap between reality and training., New York, Springer-Verlag.
HESSELING, P. (1966) Strategy of Evaluation Research in the Field of Supervisory and Management Training, Anssen, Van Gorcum.
HODGES, M. (1998) Virtual Reality in Training. Computer Graphics World.
HOLTON, E. F., III (1996) The Flawed Four-Level Evaluation Model. Human Resource Development Quarterly, 7, 5-21.
JENKINS, D., SIMMONS, H. & WALKER, R. (1981) Thou Nature art my Goddess: Naturalistic Enquiry in Educational Evaluation. Cambridge Journal of Education, 11, 169-89.
KELNER, S. P. (2001) A Few Thoughts on Executive Competency Convergence. Center for Quality of Management Journal, 10, 67-71.
KIRKPATRICK, D. (1959/60) Techniques for evaluating training programs: Parts 1 to 4. Journal of the American Society for Training and Development, November, December, January and February.
KIRKPATRICK, D. L. (1994) Evaluating Training Programs: The Four Levels, San Francisco, Berrett-Koehler.
KIRKPATRICK, D. L. (1998) Evaluating Training Programs: Evidence vs. Proof. IN KIRKPATRICK, D. L. (Ed.) Another Look at Evaluating Training Programs. Alexandria, VA, ASTD.
KRAIGER, K., FORD, J. K. & SALAS, E. (1993) Application of cognitive, skill-based, and affective theories of learning outcomes to new methods of training evaluation. Journal of Applied Psychology, 78, 311-328.
MACDONALD-ROSS, M. (1973) Behavioural Objectives: A Critical Review. Instructional Science, 2, 1-52.
MCCLELLAND, D. C. (1973) Testing for Competence Rather Than Intelligence. American Psychologist, 28, 1-14.
MCKENNA, S. (1996) Evaluating IMM: Issues for Researchers. Charles Sturt University.
MILES, R. H. & RANDOLPH, W. A. (1985) The Organisation Game: A Simulation, Glenview, Il, Scott, Foresman and Company.
MILES, W. G., BIGGS, W. D. & SCHUBERT, J. N. (1986) Students Perceptions of Skill Acquisition Through Cases and a General Management Simulation: A Comparison. Simulation & Games, 17, 7-24.
MOSIER, N. R. (1990) Financial Analysis: The Methods and Their Application to Employee Training. Human Resource Development Quarterly, 1, 45-63.
PATTON, M. Q. (1978) Utilization-Focussed Evaluation, Beverly Hills, Sage.
PEGDEN, C. D., SHANNON, R. E. & SADOWSKI, R. P. (1995) Introduction to Simulation using SIMAN, Hightstown, NJ, McGraw-Hill.
PIERFY, D. A. (1977) Comparative Simulation Game Research: Stumbling Blocks and Steppingstones. Simulation and Gaming, 8, 255-68.
RACKHAM, N. (1973) Recent Thoughts on Evaluation. Industrial and Commercial Training, 5, 454-61.
REDDIN, W. J. (1970) Managerial Effectiveness, London, McGraw Hill.
REEVES, T. (1993) Research Support for Interactive Multimedia: Existing Foundations and New Directions. IN LATCHEM, C., WILLIAMSON, J. & HENDERSONLANCETT, L. (Eds.) Interactive Multimedia. London, Kogan Page.
RICCI, K., SALAS, E. & CANNON-BOWERS, J. A. (1996) Do Computer-based Games Facilitate Knowledge Acquisition and Retention? Military Psychology, 8, 295-307.
ROSE, H. (1995) Assessing Learning in VR: Towards Developing a Paradigm. HITL.
RUSS-EFT, D. & PRESKILL, H. (2001) Evaluation in Organizations: A Systematic Approach to Enhancing Learning, Performance, and Change, Cambridge, MA., Perseus Publishing.
RUSSELL, S. (1999) Evaluating Performance Interventions. Info-line.
SALZMAN, M. C., DEDE, C., LOFTIN, R. B. & CHEN, J. (1999) A Model for Understanding How Virtual Reality Aids Complex Conceptual Learning. Presence: Teleoperators and Virtual Environments, 8, 293-316.
SCRIVEN, M. (1972) Pros and Cons about Goal-Free Evaluation. Evaluation Comment, 3, 1-4.
SEASHORE, S. E., INDIK, B. P. & GEORGOPOULUS, B. S. (1960) Relationships Among Criteria of Effective Job Performance. Journal of Applied Psychology, 44, 195-202.
SPENCER, L. M. & SPENCER, S. (1993) Competence at Work: Models for Superior Performance, New York, John Wiley & Sons.
STAKE, R. E. (1980) Responsive Evaluation. University of Illinois.
STANNEY, K., MORRANT, R. & KENNEDY, R. (1998) Human Factor issues in Virtual Environments. Presence: Teleoperators and Virtual Environments, 7, 327-351.
TEACH, R. D. & GIOVAHI, G. (Eds.) (1988) The Role of Experiential Learning and Simulation in Teaching Management Skills.
TEACH, R. D. (Ed.) (1989) Using Forecast Accuracy as a Measure of Success in Business Simulations.
THOMAS, R., CAHILL, J. & SANTILLI, L. (1997) Using an Interactive Computer Game to Increase Skill and Self-efficacy Regarding Safer Sex Negotiation: Field Test Results. Health Education & Behavior, 24, 71-86.
WARR, P. B., BIRD, M. W. & RACKHAM, N. (1970) Evaluation of Management Training, Aldershot, Gower.
WHITE, B. (1984) Designing Computer Games to Help Physics Students Understand Newton's Laws of Motion. Cognition and Instruction, 1, 69-108.
WHITEHALL, B. & MCDONALD, B. (1993) Improving Learning Persistence of Military Personnel by Enhancing Motivation in a Technical Training Program. Simulation & Gaming, 24, 294-313.
WHITELOCK, D., BIRNA, P. & HOLLAND, S. (1996) Proceedings. IN EDITIONS, C. (Ed.) European Conference on AI in Education. Lisbon Portugal, Colibri Editions.
WIEBE, J. H. & MARTIN, N. J. (1994) The Impact of a Computer-based Adventure Game on Achievement and Attitudes in Geography. Journal of Computing in Childhood Education, 5, 61-71.
WIMER, S. (2002) The Dark Side of 360-degree. Training & Development, 37-42.
WITMER, B. & SINGER, M. J. (1994) Measuring Immersion in Virtual Environments. ARI Technical Report 1014.
WOLFE, J. (1981) Research on the Learning Effectiveness of Business Simulation Games: A review of the state of the science. Developments in Business Simulation & Experiential Exercises, 8, 72.
WOLFE, J. (1985) The Teaching Effectiveness of Games in Collegiate Business Courses: A 1973-1983 Update. Simulation & Games, 16, 251-288.
WOLFE, J. (Ed.) (1990) The Guide to Experiential Learning, New York, Nichols.
WOLFE, J. & CROOKALL, D. (1998) Developing a Scientific Knowledge of Simulation/Gaming. Simulation & Gaming, 29, 7-19.
WOOD, L. E. & STEWART, P. W. (1987) Improvement of Practical Reasoning Skills with a Computer Game. Journal of Computer-Based Instruction, 14, 49-53.
YOUNG, M. (2002) Clarifying Competency and Competence. Henley Working Paper. Henley, Henley Management College.