Training is the most important function that directly contributes to the development of human resources. Yet it is also a neglected function in most organisations. Recent surveys on the investments made by Indian organisations in training indicate that a large number of organisations do not even spend 0.1 per cent of their budget on training. Many organisations do not even have a training department. If human resources are to be developed, the organisation should create conditions in which people acquire new knowledge and skills and develop healthy patterns of behaviour and styles. One of the main mechanisms for achieving this environment is institutional training.
Training is essential because technology is developing continuously and at a fast rate. Systems and practices soon become outdated due to new discoveries in technology, covering technical, managerial and behavioural aspects. Organisations that do not develop mechanisms to catch up with and use the growing technology soon become stale. Developing individuals in the organisation can thus contribute to its effectiveness.
Such development, however, should be monitored so as to be purposeful. Without proper monitoring, development is likely to increase the frustration of employees if, once their skills are developed and their expectations raised, they are not given opportunities to apply those skills. A good training sub-system helps greatly in monitoring the directions in which employees should develop in the best interest of the organisation. A good training system also ensures that employees develop in directions congruent with their career plans.
A SUGGESTED TRAINING SYSTEM
A good system of training starts with the identification of training needs. The following sources can be used for identifying training needs.
Performance Review Reports
Performance review reports help in identifying the directions in which individuals should be trained and developed. On the basis of the annual appraisal reports, various dimensions of training can be identified. Training needs identified on the basis of performance appraisal provide good information for organising in-company training and on-the-job training for a select group of employees.
Training needs identified on the basis of potential appraisal become inputs for designing training programmes or working out training strategies for developing the potential of a selected group of employees who are identified for performing future roles in the organisation.
Working in the same job continuously for several years without much change may have demotivating effects. Some organisations plan job rotation as a mechanism of maintaining the motivation of people. Training is critical in preparing the employees before placing them in a new job.
Besides these, most training programmes organised today aim at equipping managers with new technology. These programmes attempt to help managers raise their present level of effectiveness.
ORGANISING TRAINING PROGRAMMES
After identifying the training needs, the next step is to design and organise training programmes. In large companies it is possible for the training department to organise several in-company training programmes.
For designing the training programme on the basis of the training needs, the following points may be kept in view:
- Wherever a sizeable number of people have the same training needs, it is advisable to organise an in-company programme. The organisation can save a lot of cost. Besides, having a group of people from the same workplace inculcates mutuality. The probability of trainees actually applying what they have learnt is high because of strong group support.
- Whenever new systems have to be introduced, training is required to develop the competencies needed to run those systems.
- It is better to aim at in-company programmes for technical skills wherever possible and outside programmes for managerial and behavioural development.
- People performing responsible roles in the organisation should be encouraged to go out periodically for training where they would have more opportunities to interact with executives of other organisations and get ideas as well as stimulate their own thinking.
- The training department should play a dynamic role in monitoring the training activities. It should continuously assess the impact of training and help the trainees in practising whatever they have learnt.
- Whenever an individual is sponsored for training, he should be told clearly why he is being sponsored and what the organisation expects of him after he returns from the programme.
Most companies do not inform employees why they have been sponsored; such a practice reduces learning, as sponsored employees are more concerned about the reasons for their sponsorship than with getting involved in and benefiting from the training.
EVALUATION OF TRAINING
Many organisations, especially industries, have been concerned with the difficult but critical question of evaluation. Training managers or organisers are also concerned with this question. All books on training have dealt with this issue, but no satisfactory and comprehensive accounts of evaluation are available.
For the preparation of a comprehensive conceptual framework of training evaluation and an effective strategy for evaluating training programmes and systems, it is necessary to consider several aspects of evaluation. The basic question in this regard relates to the value of evaluation: why evaluate training? Hamblin has discussed this question very well: evaluation helps in providing feedback for the improvement (and better control) of training. When we discuss feedback and improvement, two relevant questions arise: feedback to whom? improvement of what? The former question relates to the main client groups, and the latter to the main dimensions and specific areas of evaluation.
Two additional questions are: how should evaluation be done? What specific ways should be adopted for it? These questions relate to the design and techniques of evaluation, respectively.
There are several partners in the training act and process, and all of them are clients of evaluation. Their needs for feedback, and their use of feedback for improvement (control), will naturally differ, with some overlap. There are four main partners in training (and clients for evaluation):
- The participants or learners (P)
- The training organisation or institute (I) including
(a) Curriculum planners (CP)
(b) Programme designers (PD)
(c) Programme managers (PM)
- The faculty or facilitators or trainers (F)
- The client organisation, the ultimate user and financier of training (O)
Literature on training evaluation has not paid due attention to these client groups.
DIMENSIONS OF EVALUATION
Attention has been given to the main dimensions of training, and most of the suggested models are based on these. Four main dimensions have usually been suggested: Contexts, Inputs, Outcomes, and Reaction. The last dimension is not in the same category as the other three: reaction evaluation can be of contextual factors, training inputs, and outcomes of training.
In all discussions of training evaluation the most neglected aspect has been the training process, which cannot be covered by training inputs. The climate of the training organisation, the relationship between participants and trainers, the general attitudes and approaches of the trainers, training methods, etc., are very important aspects determining the effectiveness of training. Evaluation of the training process, therefore, should constitute an important element. We may thus have four main dimensions of evaluation: evaluation of contextual factors (C), evaluation of training inputs (I), evaluation of training process (P), and evaluation of training outcomes (O).
AREAS OF EVALUATION
The various areas of training evaluation need more attention and elaboration. Seven main areas, with some sub-areas under each, are suggested for consideration. These are shown in Exhibit 1 in sequential order; the exhibit also shows the conceptual model of training, by relating the areas to the dimensions. This model is based on the following assumptions.
- Effectiveness of training depends on the synergic relationship and collaborative working amongst the four major partners of training (participants, training organisation, trainers and client organisation). Hence evaluation should provide the necessary feedback to these for contributing to training effectiveness.
- Training effectiveness depends not only on what happens during training, but also on what happens before the actual training (pre-training factors) and what happens after the training has formally ended (post-training factors). Evaluation cannot neglect these important contextual factors.
- Various aspects of the training process that are not direct training inputs (for example, the climate of the training organisation and the relationship between participants and trainers) also contribute to its effectiveness. Evaluation should, therefore, also focus on these factors.
- The focus or the main task of evaluation should not only be in the nature of auditing (measuring training outcomes in terms of what has been achieved and how much), but should also be diagnostic (why the effectiveness has been low or high), and remedial (how effectiveness can be raised).
DESIGN OF EVALUATION
The overall design of evaluation helps in planning the evaluation strategy in advance. Evaluation designs can be classified in various ways. Two important dimensions, however, are the time when evaluation is done (or data are collected), and the group or groups involved in evaluation (or data collection). Data on relevant aspects may be collected only once, after the training is over, or on two (or several) occasions, before the training intervention and again after the training is over. Similarly, either only the group that undergoes training, or other groups as well, may be involved in evaluation. Combining these dimensions gives us four basic designs of evaluation.
Longitudinal Design (L) is one in which data are collected from the same group over a length of time, usually on several occasions, but at least twice, i.e., before and after training. In the latter case, it is called a “before-after” design.
In the Ex Post Facto Design (E), data are collected from the group which has been exposed to training only after the training is over. Obviously, this design has inherent limitations in drawing conclusions from evaluation. But in many practical situations this is the reality, and it is a challenge for evaluation designers to devise ways of extracting the most from such a design.
Comparative Survey Design (S) may involve collection of data from many other groups, in addition to the group exposed to training. In this design also there is no control and there are limitations in drawing conclusions.
The design with the greatest degree of control and sophistication is the Matched Group Design (M). Several variations of this design can be used. Another group, matched on some significant dimensions with the group being exposed to training, can be identified, and data can be collected from both, once (ex post facto) or several times (longitudinal). Or, matched samples can be selected for a comparative or cross-sectional survey. The design can be made very sophisticated by using several matched groups (one receiving the training “treatment”, another a different type of treatment, and a third no treatment), combining it with the E and L designs, and making it a “blind” study (investigators not knowing which group is of which category). Both experimental and quasi-experimental designs can be used.
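The logic of combining the before-after (L) and matched group (M) designs can be sketched in a few lines of code. The scores, groups, and scale below are entirely hypothetical; this is only an illustration of why the control group's change is subtracted, not a prescription for any particular measure.

```python
# Sketch of a matched-group, before-after evaluation (hypothetical data).

def mean(xs):
    # Arithmetic mean of a list of numbers.
    return sum(xs) / len(xs)

def mean_change(before, after):
    # Average per-person change from the pre-training to the
    # post-training measurement.
    return mean([a - b for b, a in zip(before, after)])

def training_effect(trained_before, trained_after,
                    control_before, control_after):
    # Change in the trained group minus change in the matched control
    # group. Subtracting the control group's change removes shifts that
    # would have occurred even without training, which is the point of
    # adding a matched group to the before-after design.
    return (mean_change(trained_before, trained_after)
            - mean_change(control_before, control_after))

# Illustrative effectiveness ratings on an invented five-point scale.
trained_before = [3.0, 2.5, 3.5, 2.0]
trained_after  = [4.0, 3.5, 4.0, 3.5]
control_before = [3.0, 2.5, 3.0, 2.5]
control_after  = [3.2, 2.7, 3.1, 2.8]

print(training_effect(trained_before, trained_after,
                      control_before, control_after))  # ≈ 0.8
```

With only the trained group's data (the ex post facto situation), no such subtraction is possible, which is the inherent limitation noted above.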
Enough literature on these designs is available. Hamblin has referred to some of these, but not in a systematic way. He makes a distinction between the “Scientific” approach (rigorous evaluation to test hypotheses of change) and the “Discovery” approach (evaluation to discover intended and unintended consequences). This distinction does not serve any purpose and is, in fact, misleading. There can be variations in the degree of sophistication and rigour, and there may be different objectives of evaluation. Evaluation may be used as part of the training process to provide feedback and to plan for using that feedback. Or evaluation may be made to find out what changes have occurred in terms of scope, substance and sustenance; in the latter case, the design will be more complex and more sophisticated. As already discussed, the purpose of evaluation will depend on the main clients of evaluation and what they want to know.
TECHNIQUES OF EVALUATION
Evaluation techniques can be classified in various ways. One class comprises Response (Reactive) Techniques (R). Techniques requiring some kind of response produce some reaction in those who are responding. The very act of asking people questions (orally or in written form) may produce change. Since they produce reactions, they are called response or reactive techniques.
Other techniques can be called unobtrusive measures or secondary source data techniques, the word “unobtrusive” being borrowed from Webb et al. (1970). These make use of available data or secondary source data. Hamblin calls them “keyhole” techniques, thereby expressing his disapproval of such measures. There is, however, no reason to consider such measures unethical. All indicators, indexes, etc., are such measures. For example, to measure whether general morale has improved in a unit, it may be more useful to examine secondary source data such as absenteeism figures rather than to ask questions. Unobtrusive measures and secondary source data can be much more creative and imaginative, and need to be discovered and used more often for evaluation. However, if data are collected about individuals’ behaviour (whether by asking others or unobtrusively) without their knowledge and approval, this may be unethical. This applies as much to responsive techniques as to unobtrusive ones, because collecting information from a third person without the approval or knowledge of the person being studied is unethical.
Another non-reactive technique, a very old one, is that of Observation (O). Observation can also become a reactive technique if persons being observed know that they are being observed.
The methods of data collection for Response or Reaction Techniques (R) may include interviews, written reactions (questionnaires, scales, open-ended forms), and projective techniques. One additional method in this category worth mentioning is group discussion and a consensus report. In many cases, discussion by a small group of experienced individuals with adequate knowledge of the programme may give better evaluation results than figures calculated from routine responses.
Advances in scaling techniques have made the greatest contribution to the development of evaluation techniques. Techniques based on well-prepared instruments to measure various dimensions are being increasingly used. Various methods of scaling can be used to develop effective evaluation techniques. The three well-known scaling techniques associated with Thurstone, Likert, and Guttman can be imaginatively used in preparing new evaluation tools. More recent developments have opened new vistas for sophistication in evaluation work.
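As a small illustration of Likert-style scaling, the sketch below averages a respondent's ratings into a single attitude score, reversing negatively worded items. The item names, the five-point format, and the reversal rule are illustrative assumptions, not a reproduction of any instrument discussed in this chapter.

```python
# Hypothetical Likert-style scoring sketch (five-point agree-disagree scale).

def likert_score(responses, reverse_items=(), scale_max=5):
    # Average one respondent's 1..scale_max ratings into a single score.
    # Items listed in reverse_items are negatively worded, so their
    # ratings are flipped (1 becomes scale_max, and so on) before
    # averaging, a standard step in Likert scale construction.
    adjusted = []
    for item, rating in responses.items():
        if item in reverse_items:
            rating = scale_max + 1 - rating
        adjusted.append(rating)
    return sum(adjusted) / len(adjusted)

# One respondent's ratings on invented end-of-course items.
responses = {
    "sessions_useful": 4,
    "pace_too_fast": 2,       # negatively worded item
    "will_apply_on_job": 5,
}
print(likert_score(responses, reverse_items={"pace_too_fast"}))  # ≈ 4.33
```

The same instrument administered before and after training would feed directly into the longitudinal designs described earlier.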
Hamblin has done an excellent job in discussing the studies in training evaluation to illustrate the techniques used. His book will be found very useful for this. Whitelaw has also cited some studies but has not been able to integrate them. At the end of his book, Hamblin has summarised the various techniques discussed under his five-level model. These are:
Reaction: Session reaction scales; reaction notebooks; participants’ and observers’ records; studies of inter-trainee relationships; end-of-course reaction forms; post-course reaction questionnaires and interviews; and expectations evaluation.
Learning: Pre-course questionnaires to instructors; programmed instruction; objective tests; essay-type written or oral examinations; assessment by trainees of knowledge changes; skill and task analyses; standardised tests of skill; tailor-made techniques for evaluating skill; assessment by trainees of skill changes; standardised attitude questionnaires; tailor-made attitude questionnaires; semantic differential scales; and group feedback analysis.
Job Behaviour: Activity sampling; SISCO and Wirdenius techniques; observers’ diaries; self-diaries with interview and questionnaires; appraisal and self-appraisal; critical incident technique; observation of specific incidents, depth interviews and questionnaires; open-ended depth techniques; and prescription for involving management in the training process.
Organisation: Indexes of productivity, labour turnover, etc.; studies of organisational climate; use of job behavioural objectives to study behaviour of non-trainees; and work flow studies.
Ultimate Value: Cost-benefit analysis and human resources accounting.
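The cost-benefit analysis named at the ultimate-value level reduces, in its simplest form, to arithmetic like the following. All figures are invented for illustration; real analyses must also grapple with attributing monetary benefits to training in the first place.

```python
# Hypothetical cost-benefit sketch for the "ultimate value" level.

def training_roi(total_cost, annual_benefit, years):
    # Net benefit of a programme over its payoff horizon, expressed as a
    # fraction of its total cost (return on investment).
    net_benefit = annual_benefit * years - total_cost
    return net_benefit / total_cost

# E.g. a programme costing 200,000 credited with 150,000 of benefit per
# year over a two-year payoff horizon.
print(training_roi(200_000, 150_000, 2))  # → 0.5, i.e. a 50% return
```

A negative result would indicate that, on these (invented) assumptions, the programme does not pay for itself within the chosen horizon.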