You know training evaluation is essential and have tools to carry it out. But how engaged are your stakeholders in the process? We talk to Kim Beadle, our evaluation and measurement expert, to find out how to take a more strategic approach to programme evaluation.
Reading time: 8 minutes
Why should we invest in another training programme? Will our teams really use what they learn? How will training result in business impact? Just some of the thorny questions L&D professionals regularly face and need to answer convincingly.
Organisations are investing more and more in upskilling employees, so proving that this investment is paying off is crucial. However, fewer than 20% of organisations say they have the analytics capability to measure learning effectively, which makes answering senior L&D managers and business stakeholders very tricky.
Kim Beadle, Corporate English Solutions’ evaluation and measurement expert, says, “It’s time to re-evaluate training evaluation. By continuing to use our old methods, we may be missing opportunities to influence and engage stakeholders. This applies not only to the training itself but also to its evaluation.”
To really influence and engage stakeholders, it’s important to take a more strategic approach and make training evaluation a collaborative process. Read on for a strategic process and actionable tips that will help you answer even the trickiest questions, convince stakeholders and engage them along the way.
1. Use training needs analysis to clarify evaluation aims
Who is the audience for this training evaluation? What’s in it for them?
Every L&D team’s toolkit includes training needs analysis. We regularly identify role-related skills, map them to surface gaps and develop training objectives. But have you ever used training needs analysis to identify the aims of the programme evaluation itself?
Ask yourself who the audience for this particular evaluation is. Training participants? Business managers? External vendors? The L&D team?
Kim advises, “By identifying the audience, you will be able to connect with them during the training needs analysis phase to find out more about their needs and expectations, not only for the training but also for its evaluation.”
Different audiences will have distinct but overlapping needs:
- Course participants want reports on their own skills and performance improvements and gaps, along with advice on how to continue learning.
- Business managers also need data on their teams’ skills and performance improvements and gaps, and specifically on how these will affect business performance. If they provide the training budget, they will also be keen to learn about their return on investment. They may also be interested in how they can evaluate their direct reports’ performance and support them going forward.
- External vendors and internal delivery teams need to know how successful the training was so they can improve learning design and delivery and be selected for future projects.
And of course, the L&D team needs data on all of this.
Once you are clear about the needs of the different stakeholders, you’ll be able to define the aims of your training evaluation. However, too many aims will make the evaluation overwhelming and unachievable, so prioritise and be strategic about the areas you select.
2. Use training needs analysis to gain stakeholder buy-in and create collective responsibility
We’ve all experienced the challenges of getting stakeholders to complete training evaluations, read our reports and listen to our presentations. But it doesn’t have to be so difficult.
Along with clarifying the aims of your course evaluation, the training needs analysis phase is an excellent opportunity to build rapport with, gain the trust of and influence your training stakeholders.
Kim highlights the value of this: “By involving them in the process, listening to their concerns and really understanding their needs, you will find it easier to create shared aims for the evaluation and gain stakeholders’ agreement on the priorities.”
As you reach agreement, it’s also important to clarify who will be responsible, accountable, consulted and informed (RACI) at each stage of programme evaluation development and delivery. Use a RACI matrix, then circulate, discuss and confirm it before designing the evaluation.
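To illustrate (the exact split will vary by organisation), a simple RACI assignment for a programme evaluation might make the L&D lead responsible for designing and running the evaluation, the programme sponsor accountable for signing off its aims and budget, business managers and the delivery team consulted on priorities and tools, and participants informed of the results and next steps.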
3. Evaluate the programme, skills and learning throughout and communicate results
When should you evaluate training?
Organisations often rely on immediate post-training evaluations to gather learner feedback and measure learning and impact. However, relying on this data alone misses valuable opportunities for insight and may fail to engage stakeholders.
Kim recommends: “By analysing skills and capability levels, as well as gathering learner and manager input before, during and after the programme, you can maintain stakeholder engagement and adapt your measurement and evaluation if needed. Send surveys and hold brief catch-ups with key stakeholders to present results and make recommendations at intervals during and after the learning.”
4. Select and develop evaluation tools to meet your aims
So, what’s next in the training evaluation strategic process?
Now that you are clear about your aims, have gained stakeholder buy-in and accountability, and have identified when you’re going to evaluate, you’re ready to select or develop your evaluation tools.
But can’t we just use the same tools we always use?
You can, but they might not be the best tools to meet the aims of this programme evaluation. Kim suggests, “Review your data sources to make sure you have a comprehensive set of tools to meet all the evaluation aims, including skills and performance improvements, ROI and learning design and delivery.”
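If ROI is one of those aims, it helps to agree the calculation up front. The formula most widely used in L&D is ROI (%) = net programme benefits (benefits minus costs) ÷ programme costs × 100. As a purely illustrative example, a programme costing £20,000 that delivers £30,000 in measurable benefits gives an ROI of 50%; the harder work is agreeing with stakeholders which benefits to count and how to isolate the programme’s contribution.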
Do your assessments and tests provide a true picture of learners’ knowledge and skills?
Kim says, “Don’t rely on formal tests alone: while many business managers value their results, they don’t always demonstrate learners’ ability to use the skills and knowledge in their work. Remember that behavioural change leading to business impact takes time, so factor in assessments before, during and after the training.”
Consider a combination of measures such as:
- formal, self- and peer-assessments
- learner interviews
- work review checklists
- action plan monitoring
- on-the-job observations by peers, managers or trainers
- 360-degree feedback
- mentoring or coaching schemes that factor in skills evaluation
These measures provide a richer picture of performance, detailed evidence to support it, and higher levels of engagement from all stakeholders.
Are your programme evaluation surveys reliable and valid, leading to actionable data?
When gathering feedback on content, learning design and delivery, surveys or “happy sheets” remain the most popular tool. But Kim warns against the dangers of vanity metrics: data designed to make the training look good. “Questions such as ‘rate the usefulness of the training from 1 to 5’ may make for attractive graphs and positive messages, but how do your respondents know the difference between a 3 and a 5 rating? And how can you interpret their answers?”
For advice on how to create reliable, valid, actionable training evaluation surveys, read our blog 5 Tips for designing reliable, valid, actionable course evaluation surveys.
Parting thoughts
On a final note, Kim says, “If the training evaluation is ongoing, so too should be your review of the approach and the tools you are using. Are they working? Are you getting actionable data and evidence of improved performance? If not, what should you change? Make sure you keep up with the latest models and approaches.”
Discover our four-step approach to ensure targeted, relevant learning solutions that deliver business impact for your teams.
Find out more about how to communicate training ROI by reading our blogs:
- How to use storytelling to persuasively communicate ROI to stakeholders
- 3 reasons why strong business writing is good for your business