
Posted on Jun 24, 2014

Pop Quiz: How Do You Evaluate Training?

Recently, I’ve encountered several instances where agencies want to make sure they get what they pay for when training their employees. One method that I’ve observed is giving students a test at the beginning and end of a class. At first glance, this makes sense: Let’s make sure the employees are learning something when we spend our scarce training dollars.

To understand the limitations of the before and after test, let’s look at Laura, a GS-9 analyst with ambitions to move up to deputy program manager and eventually to program manager. Laura signs up for a three-day class on effective briefing and presentation skills. The morning of the class, she feels nervous, wondering “Will I have to give a speech in front of the whole class?”

As Laura walks into the classroom on Monday morning, the instructor hands her a test. Laura finds a seat and looks over the test, which asks how she should analyze the audience when preparing a speech and to identify “bridge words.” She’s presented with four short lists of words to choose the right answer from; the words start to jumble together as she reads them. Memories of standardized tests in high school start to run through her head.

Laura does her best to answer the questions and hands her test in. Other students filter in, starting their tests. While waiting, Laura strikes up a conversation with Sam. The instructor asks them to be quiet so that the other students can focus on their tests. Glancing at her iPhone®, Laura realizes that it is almost 10 a.m. A whole hour has gone by as they wait for everyone to finish the test!

Finally, the instructor begins the class. Laura learns many tips and ideas for giving a great briefing. She even learns that “bridge words” are transition words, such as however, in addition, and for instance, that help the audience know that you are moving on to a new thought. On the afternoon of the last day, each student prepares a short talk. Laura feels confident and eager to try her new skills. The students take turns giving their talks, and Laura receives a standing ovation for hers.

At the end of the class, the instructor hands out another test. Instead of leaving with a sense of exhilaration that she nailed her talk, Laura leaves a bit anxious that she didn’t remember all of the key characteristics of effective presentations. She wishes she could ask the instructor a few more questions, but there is no time; the class is done and students filter out of the room.

This scenario highlights several drawbacks of pre- and post-tests:

  • They take away valuable instruction time
  • They test students on material that they have not yet been taught
  • They often measure test-taking skills more than knowledge

Further, pre- and post-tests are not good measures of a student’s skill or behavior. Laura could get a perfect score on her post-test, yet still not be able to give an effective presentation.

But, wait! Isn’t it possible that, for whatever reason, Laura did NOT learn what a “bridge word” was? Don’t we need to know if the instructor did a poor job teaching? Or that the materials didn’t clearly explain certain concepts? Or that Laura just daydreamed during the class? Yes, that is possible. That’s why we ask students to fill out an evaluation at the end of the course. The end-of-course evaluation is a way to get feedback on whether the course materials and instructor helped the students learn.

We also rely on the instructor to make sure that students are learning. Good instructors do this naturally. As they explain a new concept, they look around the room for puzzled faces. They ask students questions to make sure they understand. They take an extra couple of minutes to explain a difficult concept to a student during the morning break. They use facilitation techniques to keep students’ attention.

A better way to show the value of training is to conduct a follow-up evaluation. A short evaluation can be sent to students some time after the class, asking them to indicate whether they have used the skills they learned in the training. A well-designed follow-up evaluation should take only a few minutes of the student’s time.

A follow-up evaluation also can identify obstacles that students encounter in trying to apply their new skills. For example, suppose Laura asks her supervisor to let her present a new policy at the monthly division meeting, but the request is denied because the supervisor always explains policy changes. This obstacle can be identified in a follow-up evaluation and brought to the attention of the division chief, who could remind supervisors that they are expected to delegate some presentations to their staff.

Stepping back from our example for a moment, it’s helpful to remember that the purpose of training should be to teach new skills that the agency needs to better achieve its mission. As training expenses continue to be closely scrutinized, showing that employees use the training to more effectively do their job will have much more impact than saying that students improved their test scores by 15%.

Designing an effective evaluation program can be a complex undertaking. Whether you need to refresh your end-of-course evaluations, start conducting follow-up evaluations, or design a comprehensive training evaluation program, it is possible to show that training is having an impact on your agency’s mission.

If you are a learning professional, do you have any tips about how to convince management about training’s impact? If you are in a management role, what evidence do you want to see to convince you that training is working?
