7 ways to do digital assessment
June 17th 2021 | By Stephanie Karaolis
At Solvd, we don’t believe traditional end-of-course tests are a good feedback mechanism for online experience design. But assessment and feedback do have their place.
When someone has a clear goal in mind (being better at their job, or improving a particular skill), they want to know where they are improving. The gaming industry does this well, using feedback to signal progression or completion of stages.
Other organisations, having only recently moved their people development offerings online, are still working it out. Where certification is a key selling point (like universities), being able to provide feedback and check participant understanding online is key.
The ideal might be individualised feedback from an expert instructor, but cost and availability mean it’s not often possible. So, we’ve compiled some alternative strategies, from fully automated to fully ‘human’, suited to a range of budgets and situations. Often the best approach is a combination of two or more of these.
1. Multiple choice quizzes: A tried and tested strategy for instant, automated feedback.
These have limitations: they test theory better than application, they’re not suitable for skills assessment, and poor design can mean they’re just short-term memory tests. But they’re simple and inexpensive and can be an effective way to check knowledge and understanding.
2. Deliverable-based comparisons: A good option if participants create something.
This strategy allows them to see how their work compares to best practice examples and other participants’ deliverables. Asking people to infer feedback themselves isn’t always appropriate, but can work well combined with other strategies. There’s no live, synchronous requirement and the only cost is in creating the examples, which can serve a longer-term purpose as workplace reference materials.
3. Peer grading and feedback: The lowest cost way to give direct human feedback.
Asking participants to do the ‘heavy lifting’ makes sense for cohorts of hundreds or more, like MOOCs, and reviewing and grading other people’s work can be a good learning experience in itself. The challenge is the variable quality of feedback given (a big cause of high MOOC dropout rates). So investment of time and money is needed to put systems in place that ensure consistent grading and quality peer feedback.
4. Self-grading: One of the most financially and logistically appealing options.
Having people reflect on and evaluate their own work can be a worthwhile activity, with the right incentive and support. Asking them to reflect on a model answer versus their own work isn’t enough. People need to be motivated to take the self-assessment seriously, and then have the right support and examples to help them recognise what good looks like.
5. AI-assisted grading tools: Improved efficiency for very large participant groups.
For now, AI-assisted tools can automate parts of the process, for example by summarising and grading work, or duplicating feedback to everyone giving the same response. But they can’t (yet) eliminate the need for human input: upfront to create variations of feedback, or for ongoing oversight of grading quality, and so on. Currently, these tools are a big investment only suited to very high participant numbers and standard answer assessment formats.
6. Alumni grading and feedback: A flexible pool of people giving direct feedback.
Alumni aren’t experts, but they still bring a level of understanding and insight. It’s also easier to build a large pool of alumni, offering greater capacity and availability than a group of expert instructors and assessors. An incentive (monetary or not) can help motivate alumni involvement, and someone needs to manage the process and monitor the quality and consistency of alumni feedback. Even so, it’s much more affordable than equivalent input from experts.
7. Expert instructors: Maximum value from short bursts of feedback.
Experts are expensive, with limited availability, and it’s hard to scale their involvement up to large cohorts. But short bursts of live feedback give participants the benefits of expert input without extortionate costs. Imagine giving teams five minutes to pitch an idea or deliverable to a Dragons’ Den style panel of experts and getting immediate, high-quality feedback on a small group basis. This kind of thing makes an online programme feel more premium, too.