The Training Marketplace

How to make sense of feedback you get from learners (and why bad feedback is the best feedback) 

Adrian Ashton

It’s a familiar ritual at the end of any training session: “please fill out the feedback forms – they help us understand how you’ve found the learning today” (or some such similar bland recital).

And what people write on them is usually rubbish:

  • People are desperate to leave once the session is completed, and feedback forms are always optional, so most people won’t complete them (or, if they do, won’t pay much attention to what they’re writing).

  • People have usually spent the last few hours sharing insights, stories, and knowledge with each other collectively and openly. So, to now be asked for their thoughts privately means they fast-forward to the next occasion when they’d likely be talking about their experience of the learning in a similarly private way. Which means that they suddenly disassociate from what and how they’ve been learning, and go into a polite small-talk mode (which in Britain usually defaults to ‘very well, thank you’ whenever anyone asks us anything, regardless of how we really feel).

  • In light of the above, having had no meaningful opportunity to reflect on either their learning or the experience of it, people will instead write about their personal experience of the day on the feedback form (which is highly subjective).

  • If someone feels aggrieved that they didn’t get the chance to say everything they wanted to, or disagreed with how someone shared an idea, then they’ll use the feedback form to express this (in much the same way a ‘keyboard warrior’ attacks someone else’s social media post that they took personal issue with) – which means it can’t be taken as a reflection of any actual quality issue in how the learning was designed or delivered.

And all of this means that it’s actually very easy to manipulate people into writing what you want them to on these pages – in the past, I’ve experimented with this idea:

  • When passing around the feedback forms, I’ve openly said to people, “if you’re not sure what to write, write ‘excellent’ for everything”. When I have, positive feedback usually jumps by about 10–15% compared with what might otherwise be expected for the same session.
  • Similarly, I’ve found that providing better-quality biscuits and coffee, and ending the session early, can elicit a higher level of reported satisfaction from learners (irrespective of how good the content of the session actually was).

So instead, when engaging in this end-of-session activity, I now take a different approach:

  1. I introduce the concept of feedback, and explain how I use it – including citing examples of where I’ve made changes to sessions and their content based on what people have said about them previously. This means that people are more comfortable and confident to be critical in their feedback, which in turn means that I can use the feedback more constructively in reviewing session content and design to make sure that it really is as good as it can be for future learners.
  2. I invite everyone to openly share a brief reflection on how they’ve found the session, in one word (or similar). This helps people start to better reflect on their own experiences before committing feedback to the form.
  3. I try to understand and note what wider contextual factors have been impacting learners during the session. (For example, I once ran a course where no-one attending had wanted to be there, but had been forced to by their managers. Unsurprisingly, feedback showed a lower-than-usual level of satisfaction – though perhaps not as low as might have been expected, given that everyone was initially resistant to being in the room. I’ve also led sessions that were constantly interrupted by builders; where there were no windows for daylight; and, in one instance, where people started a fight between themselves – and all of these meant the reported feedback on the learning was higher or lower than it might otherwise have been.)
  4. And I benchmark feedback through a rolling average across all the courses I deliver, as well as against external standards. This helps me spot where external factors like those illustrated above may have created an oddity in the scoring – which can then be accepted without needing to act on it – and where there may be signs that something exceptionally good or unexpected is happening…

Adrian Ashton is a freelance trainer, and an associate with the international Co-operative College, Anglia Ruskin University, and other Universities.

He has designed, reviewed, and delivered a range of training and learning programmes for over 30 years, as well as supporting peers by leading ‘train the trainer’ courses for both in-house teams and wider sector advisors.

Partly because of his work and approaches to ensuring inclusivity – so that people can fully engage with, and access, a range of learning on different topics and in different formats – he was recognised as a UK Progress Champion for diversity, equity, and inclusion in the Gamechangers global awards in 2022, and was offered a life fellowship of the RSA in 2024.