We are very keen that our work is properly evaluated. Not only do we learn from the process but it is also an excellent way of demonstrating value for money and return on investment.

Sadly, most workplace learning and development escapes rigorous quality and financial evaluation. We think that’s a huge missed opportunity. We place enormous emphasis on evaluating the quality and effectiveness of our programmes and we encourage our clients to hold us to account. As a rule of thumb, we recommend that 10% of the budget for a specific L&D intervention be spent on research and programme design, 80% on delivery and the final 10% on evaluation and assessment. If you’re looking to achieve measurable performance improvement and a quantifiable return on investment – and we think you should be – then you really do need to measure that improvement and quantify that return. A ‘before and after’ 360º review can be an invaluable way of measuring the effectiveness of a development programme: see more on that here.

But one can’t wait until the end of a programme to gauge its effectiveness. All good trainers crave feedback, and we use all participant and client feedback – formal or informal, given during, at the end of, or after any programme – to ensure continuous enhancement and updating of materials, content, style and approach. In the case of structured development programmes, we also recommend that participants complete a short analysis of their learning at ‘flashpoints’ within the programme. This analysis is drawn from their ‘learning diary’, a record of their progress against their learning objectives throughout the programme. The ‘learning review’ can link into their business project (if that’s an integral part of the programme) and gives both the client and us an indication of how the learning is being embedded back in the workplace.

Our approach to evaluation and assessment

The key to evaluating the success of any training course or development programme is identifying and understanding ‘success’ from the very start. Most programmes have a set of desired outcomes and outline objectives, so part of our initial project consultation involves working with key stakeholders to develop these into hard measures of success or business drivers. We also ensure that there are clear links between each module’s objectives and organisational goals.

Our structured approach is based on continuous evaluation throughout the programme at four key levels: Reaction, Learning, Application and Results. This is a tried-and-tested model (which you will recognise instantly), but set out below are some of the specific techniques we use to make it work.

  1. Reaction – measure what participants think or feel about the training
    • What? Evaluate participants’ opinions of the training received, including pre-course work, content, materials, methods, activities and trainers.
    • Why? To ensure the training achieves its overall objectives and outcomes as well as to improve future training events.
    • How?
      – Conduct verbal reviews and obtain written feedback.
      – Produce a post-module report covering details of how the module was received (eg, process, content, delivery, style) and recommendations for improvement.
      – Utilise a variety of methods to improve the range and quality of feedback, both on an ongoing basis and after events. These include anonymous informal feedback cards, self-managed review groups, and delayed email reviews (formal and informal).

  2. Learning – measure what the participants learned from the programme
    • What? Evaluate the level of knowledge and skills that have been acquired and retained from the programme.
    • Why? To ensure the training achieves its overall objectives and outcomes as well as to improve future events.
    • How?
      – Individual learning plans can be used throughout each module to ensure that the training content is translated into personal learning.
      – A multiple-choice questionnaire, completed six months after the event, follows up on the learning retained; results are compared with those gathered before the event.
      – Participants are asked to set specific objectives for a project they are currently undertaking.
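The pre/post questionnaire comparison above can be reduced to a simple summary. Below is a minimal sketch (with made-up scores and a hypothetical `retention_summary` helper, not part of any real system) of how mean scores before the programme and at the six-month follow-up might be compared:

```python
# A minimal sketch (hypothetical data) of comparing pre-course questionnaire
# scores with six-month follow-up scores to gauge learning retained.

def retention_summary(pre_scores, post_scores):
    """Return mean pre-course score, mean follow-up score, and mean gain."""
    mean_pre = sum(pre_scores) / len(pre_scores)
    mean_post = sum(post_scores) / len(post_scores)
    return mean_pre, mean_post, mean_post - mean_pre

# Hypothetical marks out of 20 for the same five participants.
pre = [8, 11, 9, 12, 10]
post = [14, 16, 13, 17, 15]
print(retention_summary(pre, post))  # (10.0, 15.0, 5.0)
```

In practice the same participants should complete both questionnaires, so that the gain reflects individual retention rather than a change in the group's composition.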

  3. Application – measure the effect of the programme on job performance
    • What? Evaluate the extent to which participants are doing things differently back in the workplace.
    • Why? To ensure the training achieves its overall objectives and outcomes as well as to improve future events. Understanding how behaviour has changed, or why it has not, enables us to make relevant improvements to the module.
    • How?
      – Review progress against individual development plans at each peer coaching session and appraisal session.
      – Use structured questionnaires in general performance management reviews to identify areas of performance improvements resulting from the training.
      – Use 360º feedback to gain evidence of behavioural change.
      – Review the specific objectives previously set for applying learning to a current work project.
      – Ask participants to write a ‘2U4U’ letter at the end of each module, summarising 3-4 actions they commit to over the following 8 weeks. The letter is retained and posted back to them at the end of the 8 weeks, so that they take responsibility for applying their own learning.

  4. Results – measure the effect on organisational performance
    • What? Measure the overall effectiveness of the programme on business performance.
    • Why? To measure and demonstrate return on investment as well as value added to the business.
    • How?
      – Our approach to measurement covers five stages:
      1. Prior to commencement of the programme, identify the key business drivers and measures of organisational performance that will be used for assessment.
      2. Capture these measures and results prior to the programme commencing.
      3. Agree how long the training takes to affect each measure (ie, how long it will realistically take for changed performance to show up for that particular measure).
      4. Re-assess measures at appropriate points during and after the programme.
      5. Identify the other factors that might affect the measures and consider methods for minimising their impact (eg, comparing results with parts of the business where similar training has not taken place).
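One way the five stages can come together, particularly the comparison in stage 5, is a simple ‘difference-in-differences’ calculation: the performance change in the trained group, minus the change in a comparable untrained group, valued in money terms and set against the programme cost. The sketch below uses hypothetical figures and a hypothetical `roi_percent` helper purely for illustration:

```python
# A minimal sketch (hypothetical figures) of isolating the training effect by
# comparing a trained group with a comparable untrained group, then expressing
# the net benefit as a return on investment.

def roi_percent(trained_before, trained_after,
                control_before, control_after,
                value_per_point, programme_cost):
    """Difference-in-differences estimate of training benefit, as ROI %."""
    trained_change = trained_after - trained_before
    control_change = control_after - control_before
    # Change attributable to the programme, net of background trends.
    attributable = trained_change - control_change
    benefit = attributable * value_per_point
    return 100 * (benefit - programme_cost) / programme_cost

# Hypothetical example: performance scores before/after, each score point
# assumed to be worth £5,000 of annual output, programme cost £40,000.
print(roi_percent(60, 78, 60, 66, 5_000, 40_000))  # 50.0
```

The value placed on each point of the chosen measure is the assumption doing the heavy lifting here, which is why stage 1 – agreeing the business drivers and measures with stakeholders before the programme starts – matters so much.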

    In addition, individuals prioritise and commit to development actions. These link through to the key business drivers, making the connection between individual actions and return on investment explicit.

    To talk through the practicalities of training evaluation and assessment, please just give us a call on 01582 714280. It’s good to talk!