Old-School Report Cards
Think back to elementary school report card day. Maybe it was one of your favorite days because you knew you could count on a long column of As. Or maybe you dreaded it because you knew no one at home was going to believe the dog had eaten your report card for the third time. I distinctly remember always getting low grades in Art because I never mastered coloring between the lines.
Letter grades are a type of learning metric, but what do they actually achieve? Beyond serving as a barometer of a student’s overall success in a subject over a grading period, they don’t do much.
But what if teachers had access to metrics and data that could help them intervene with struggling students in real time rather than assigning them stress-inducing scores after the fact? What if teachers could use data to identify a student who was likely to find Art challenging, pinpoint the exact mathematical concept a student was struggling with, discern which learning styles each student found most effective, or even have the foresight to support a student with an undiagnosed learning disability?
Predictive Learning Analytics
All these possibilities fall within the realm of predictive learning analytics, a burgeoning field in instructional design. As instructional designers, we can collect data that helps predict future student success.
As Phillip D. Long and George Siemens write in Penetrating the Fog: Analytics in Learning and Education:
“A byproduct of the Internet, computers, mobile devices, and enterprise learning management systems (LMSs) is the transition from ephemeral to captured, explicit data. Listening to a classroom lecture or reading a book leaves limited trails. A hallway conversation essentially vaporizes as soon as it is concluded. However, every click, every Tweet or Facebook status update, every social interaction, and every page read online can leave a digital footprint. Additionally, online learning, digital student records, student cards, sensors, and mobile devices now capture rich data trails and activity streams.”
If we apply these higher-education principles to employee training courses, we could use data of this kind to predict learner knowledge retention, motivation, and aptitude.
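To make that concrete in a corporate setting, here is a minimal sketch of what capturing one of those data trails might look like. It assumes a hypothetical in-house `log_event` helper rather than any particular LMS or Learning Record Store API; the field names follow the xAPI actor/verb/object pattern but are purely illustrative.

```python
import json
import time

def log_event(learner_id, verb, obj, result=None):
    """Capture one learner interaction as an xAPI-style statement.

    Illustrative only: a real system would send the statement to a
    Learning Record Store or analytics pipeline instead of printing it.
    """
    statement = {
        "timestamp": time.time(),
        "actor": learner_id,  # who acted
        "verb": verb,         # what they did, e.g. "viewed" or "answered"
        "object": obj,        # what they acted on, e.g. a module or question
        "result": result,     # optional outcome, e.g. a quiz score
    }
    print(json.dumps(statement))

# Every click, page view, and quiz attempt becomes a captured data point.
log_event("learner-42", "viewed", "module-3/page-7")
log_event("learner-42", "answered", "quiz-3/question-2", result={"score": 0.8})
```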
Kirkpatrick’s Four-Level Training Evaluation Model
Although advances in predictive analytics are exciting, we are likely still several years away from accessible learning systems that can make accurate and relevant predictions. For one thing, while it’s relatively easy to collect data, analyzing it well is expensive and time-consuming, and many institutions don’t have the tools to perform analytics at that scale. However, even though truly predictive learning analytics is still in the future, we can take steps as instructional designers to ensure we are collecting the right kinds of metrics and data about our learners.
Start by asking yourself what kinds of metrics you are already capturing. If your company is like most, these might include the number of training hours learners complete, the completion rate, and, if your training includes any kind of assessment, the percentage of correct versus incorrect answers. These metrics are important, but they’re similar to report cards: they only measure a learner’s performance after the fact and can’t predict success or identify needs along the way.
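As a concrete illustration, here is a minimal sketch of how those report-card metrics might be computed from raw completion records. The record layout is an assumption for the example, not any particular LMS export format.

```python
# Hypothetical learner records, standing in for an LMS export.
records = [
    {"learner": "A", "hours": 3.5, "completed": True,  "correct": 18, "questions": 20},
    {"learner": "B", "hours": 1.0, "completed": False, "correct": 9,  "questions": 20},
    {"learner": "C", "hours": 4.0, "completed": True,  "correct": 15, "questions": 20},
]

total_hours = sum(r["hours"] for r in records)
completion_rate = sum(r["completed"] for r in records) / len(records)
avg_score = sum(r["correct"] / r["questions"] for r in records) / len(records)

print(f"Total training hours: {total_hours}")        # 8.5
print(f"Completion rate: {completion_rate:.0%}")     # 67%
print(f"Average assessment score: {avg_score:.0%}")  # 70%
```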
To move past these kinds of report-card metrics, a useful starting point is Kirkpatrick’s Four-Level Training Evaluation Model. The four levels of this model are:
1. Reaction
This level measures how well the training was received by the audience. You can measure this with a short survey (a sketch for tallying the responses follows the question list). Possible questions might include:
- Did you feel this training was a valuable experience?
- What was the most valuable part of the training?
- What could have been improved about the training?
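Even a handful of survey responses can be summarized programmatically. Here is a minimal sketch, assuming responses are stored as simple records whose field names mirror the questions above; both the storage format and the fields are illustrative.

```python
from collections import Counter

# Hypothetical responses to the three questions above.
responses = [
    {"valuable": "yes", "most_valuable": "hands-on exercises", "improve": "more practice time"},
    {"valuable": "yes", "most_valuable": "the worked examples", "improve": "shorter videos"},
    {"valuable": "no",  "most_valuable": "the final summary",   "improve": "clearer objectives"},
]

# Tally the closed-ended question and collect the open-ended feedback.
valuable = Counter(r["valuable"] for r in responses)
print(f"Found the training valuable: {valuable['yes']} of {len(responses)}")
for r in responses:
    print(f"- Most valuable: {r['most_valuable']}; improve: {r['improve']}")
```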
2. Learning
This level measures how much a learner’s knowledge has increased after training. You can measure this through multiple-choice questions, performance assessments, and skills tests.
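One way to quantify that increase is to compare pre- and post-training assessment scores. The sketch below computes a simple normalized gain (what fraction of the available headroom each learner closed); the formula and scores are illustrative, not part of Kirkpatrick’s model itself.

```python
def learning_gain(pre, post):
    """Normalized gain: the fraction of possible improvement realized.

    Scores are fractions between 0 and 1; a learner who starts at 0.50
    and finishes at 0.85 has closed 70% of the gap to a perfect score.
    """
    if pre >= 1.0:
        return 0.0  # already at the ceiling, no room to improve
    return (post - pre) / (1.0 - pre)

# Hypothetical (pre, post) assessment scores for three learners.
scores = {"A": (0.50, 0.85), "B": (0.40, 0.70), "C": (0.80, 0.90)}
for learner, (pre, post) in scores.items():
    print(f"{learner}: gain = {learning_gain(pre, post):.0%}")
# A: gain = 70%, B: gain = 50%, C: gain = 50%
```

Whether a raw percentage-point difference or a normalized gain is more useful depends on how much ceiling your assessment leaves high-scoring learners.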
3. Behavior
This level measures how well learners have applied what they learned to relevant tasks. You can measure this with behavioral interviews, on-the-job observations, and assessments.
4. Results
This level measures the final results of your training. What you measure will vary depending on the objectives of your training, but general factors might include increased productivity, improved safety, and/or higher revenue.
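Here is a minimal sketch of a results-level comparison, assuming you can pull the same business metrics for the periods before and after training; the metric names and numbers are made up for illustration.

```python
# Hypothetical business metrics before and after a training rollout.
before = {"units_per_day": 120, "safety_incidents": 5, "revenue": 250_000}
after  = {"units_per_day": 132, "safety_incidents": 3, "revenue": 270_000}

for metric in before:
    change = (after[metric] - before[metric]) / before[metric]
    print(f"{metric}: {change:+.0%}")
# units_per_day: +10%, safety_incidents: -40%, revenue: +8%
```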
Training programs won’t get better unless we pinpoint the areas that need improvement. Applying the metrics in Kirkpatrick’s model is a great place to start if you’re looking to expand the data you collect and analyze from your training programs. How do you measure training?