Before handing out your course evaluation form next time, stop! Think! When was it last reviewed? Is it really getting you meaningful ROI data and insights that drive up the quality of future training interventions? We share 5 invaluable tips to revitalise your feedback form.
Reading time: 6 minutes
Before reaching for your go-to course evaluation feedback form, ask yourself if it’s really fit for purpose. Are you collecting only vanity metrics that make the training look good? Or does your survey collect actionable data for improvements?
And what about your L&D team? Are you continuously learning? Think about when you last updated your feedback form design, and when you were last excited about trying out new evaluation methods.
Follow these 5 tips for effective course evaluations, inspired by a recent review of Corporate English Solutions’ evaluation systems.
Tip 1: Design out bias
Have you ever realised that the order of your survey questions may build in bias? Question-order bias occurs when the respondent is influenced, negatively or positively, by the preceding question.
All your hard work in designing the survey and collecting the data may come to nothing if the answers are invalid. Happily, this design flaw is easily preventable. Consider these options:
- Provide a summary of the training content and objectives at the top of the form to remind participants.
- Randomise the question order. When questions on similar areas are clustered together, respondents may slip into answering them all the same way (see the sketch after this list).
- Ask for overall satisfaction only after asking about specific aspects. Studies have found that when respondents are asked about overall satisfaction first, reported satisfaction levels are far lower.
- If you need to cluster themed questions, review for potential question-order bias.
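If your survey tool supports it, you can randomise question order programmatically. Here is a minimal Python sketch, assuming a hypothetical question bank (the questions shown are illustrative, not from a real form): it shuffles the specific-aspect questions and keeps overall satisfaction last, per the points above.

```python
import random

# Hypothetical question bank -- your own form's questions will differ.
specific_questions = [
    "To what extent was the course content at the right level of challenge?",
    "How clearly were the learning objectives explained?",
    "How relevant were the practice activities to your role?",
    "How well did the pace of the sessions suit you?",
]

random.shuffle(specific_questions)  # break up clusters of similar questions

# Per the tips above, ask for overall satisfaction only after the specifics.
ordered_form = specific_questions + [
    "Overall, how satisfied were you with the course?",
]

for number, question in enumerate(ordered_form, start=1):
    print(f"{number}. {question}")
```

A fixed overall-satisfaction position with shuffled specifics keeps the form easy to analyse while still designing out question-order bias.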
Tip 2: Design out misinterpretation
The old ways, in this case, aren’t the best. The latest thinking on workplace evaluation reveals that traditional Likert scales (strongly disagree to strongly agree choices) are easily misinterpreted. The distinction between choices is often unclear to the respondent. They may hesitate, suffer response fatigue, or become ‘tick happy’, rushing to the end without any real thought.
And you’re left with data that is unreliable.
The way questions are worded can also stand between you and actionable data. We’re all familiar with the positively framed statements on a feedback form, such as ‘I achieved my learning objectives’. This format doesn’t leave respondents room to make their own judgement. It certainly doesn’t tell us what was successful or why. Numerical scales, similarly, don’t give us a detailed picture.
It’s time to make feedback forms feedforward.
Consider these options:
- Write descriptive answer choices to help participants make more precise decisions. Example: To what extent was the course content at the right level of challenge?
a) The content was generally too difficult.
b) Some content was too difficult.
c) The content was at the right level of challenge.
d) Some content was too easy.
e) The content was generally too easy.
- Avoid double-barrelled questions that ask about two aspects at once, such as ‘The training was engaging and useful’. The response won’t clearly indicate which part participants are reacting to. Focus each question on a single specific aspect.
- Add open questions if you have to include a Net Promoter Score (NPS) rating question, as in the sketch at the end of this tip. These help respondents explain the reasons for their rating and give you more accurate and actionable data. Respondents’ narrative answers also provide engaging, memorable stories that communicate training ROI more persuasively to stakeholders, and they can double as testimonials for future course promotion.
Descriptive scales may take longer to write, but they take no longer to answer than other scaled-response questions. The upside is that they help participants make more precise decisions, producing responses that are less open to misinterpretation or bias.
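The standard NPS arithmetic is simple: the percentage of promoters (ratings of 9–10) minus the percentage of detractors (0–6). Here is a minimal Python sketch with made-up responses, pairing each rating with the open ‘why’ question suggested above so the score stays actionable:

```python
# Hypothetical responses: (0-10 NPS rating, answer to the open 'why' question)
responses = [
    (9, "The role plays mirrored real client calls."),
    (6, "Too much theory in the first session."),
    (10, "The email templates were immediately useful."),
    (7, ""),
    (4, "Sessions clashed with peak workload."),
]

ratings = [rating for rating, _ in responses]
promoters = sum(1 for r in ratings if r >= 9)   # ratings of 9-10
detractors = sum(1 for r in ratings if r <= 6)  # ratings of 0-6
nps = 100 * (promoters - detractors) / len(ratings)

print(f"NPS: {nps:+.0f}")

# Keep each rating attached to its 'why' -- this is where the action lies.
for rating, comment in responses:
    if comment:
        print(f"{rating}/10 -- {comment}")
```

The number alone tells you little; the paired comments tell you what to fix and what to celebrate.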
Tip 3: Design for ease
If we’re honest, we’ve all felt the frustration of being trapped inside an online evaluation maze. Help respondents break free to a happy completion by offering choices and building in autonomy.
Consider these options:
- Offer a choice of response format (online link or electronic document).
- Allow time during the training to complete the course evaluation form.
- Keep the survey concise, aiming for a completion time of 10 minutes or less, and use mostly closed questions.
- Create the option to skip some questions to design out respondent resentment and increase completion rates.
- Ask questions that respondents can answer now, not ones based on future predictions, such as the extent to which their skills will improve. Avoid overly technical questions such as those about the training methodology.
Tip 4: Design for relevance
Ask yourself: Why would you and your teams continue to do the same thing if you know it doesn’t get the results you want? It’s vital for you as an organisation to know beforehand what priority areas to measure. It’s vital, too, for the participant to see that their feedback matters.
Consider these options:
- Identify priority areas for feedback.
- Design a feedback form based on questions on your priority areas only.
- Identify what you are genuinely prepared to change if the feedback reveals areas for improvement.
- Update respondents, where possible, on the survey results and how they are being actioned.
Tip 5: Design in metrics of success
Standardised scale responses are often chosen for course evaluation surveys because they are reliable: answers are simple to analyse and present, and results can be reproduced consistently. Sounds ideal.
But ask yourself: are you and your teams satisfied with the data you get? Stick to standardised scales alone and you’re stuck with data that isn’t actionable.
Surely, more descriptive answer choices are more difficult to analyse, right?
Wrong.
Build a traffic light system into your course evaluation to be clear where success and improvements lie.
Consider these options:
- Assign a standard of acceptability to each answer choice, indicating excellence (green), adequacy (amber) or a flag for improvement (red), as in the sketch after this list.
- Include open questions to boost the quality of specific data. For example, allow respondents to expand on scaled responses or provide general comments.
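Here is one way a traffic-light mapping might look in practice, reusing the descriptive answer choices from Tip 2. This is a minimal Python sketch; the status assigned to each choice is an assumption to adapt to your own standards of acceptability.

```python
# Map each descriptive answer choice from Tip 2 to a traffic-light standard.
# The statuses assigned here are illustrative -- set your own standards.
TRAFFIC_LIGHT = {
    "The content was generally too difficult.": "red",             # flag for improvement
    "Some content was too difficult.": "amber",                    # adequate
    "The content was at the right level of challenge.": "green",   # excellent
    "Some content was too easy.": "amber",
    "The content was generally too easy.": "red",
}

def summarise(answers):
    """Tally responses by traffic-light status."""
    counts = {"green": 0, "amber": 0, "red": 0}
    for answer in answers:
        counts[TRAFFIC_LIGHT[answer]] += 1
    return counts

# Hypothetical batch of responses to the challenge-level question.
sample = [
    "The content was at the right level of challenge.",
    "Some content was too difficult.",
    "The content was at the right level of challenge.",
    "The content was generally too easy.",
]
print(summarise(sample))  # {'green': 2, 'amber': 1, 'red': 1}
```

One glance at the tally tells you and your stakeholders where success and improvements lie, without wading through raw scale data.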
Make your training evaluations fit for purpose. Focus on the opportunities for improvement and harness the latest evaluation design methodology for impact. Know what you want to measure and what you can action. Use descriptive scales for actionable and persuasive data. And don’t forget to communicate the changes you make as a result of your stakeholders’ contributions.
Find out how we can support you to build a customised learning programme, track progress and report on impact.