According to Grant Wiggins, there are seven essential elements of effective feedback.
Feedback should be:
- goal-referenced,
- tangible and transparent,
- actionable,
- user-friendly,
- timely,
- ongoing, and
- consistent.
I recently looked at the new Common Core Math section of the Khan Academy. I’d like to analyze it within this framework of feedback.
Here is a sample exercise from the Khan Academy, related to the Common Core grade 3 standard 3.G.A.1. Not familiar with this standard? Just follow the previous link; the language there is identical to what is written on the Khan Academy website.
Goal-referenced:
There are two places where goals are embedded within this question. The first is an implicit goal, described by the Common Core standard linked to this problem. The second is embedded within the problem itself: get five correct in a row.
An obvious issue here: given that the Khan Academy is mostly intended to be used independently by students (apparently the definition of personalized learning the Khan Academy is using), the language of the standard is probably not student-friendly, and in particular it does not lend itself well to serving as a goal for students.
Getting five questions correct in a row is a goal a 3rd grade student can clearly tell whether or not they have achieved, although the question remains whether five correct answers in a row mean that students understand the standard being assessed by these questions.
Tangible and transparent:
There are two main ways students get feedback from the Khan Academy exercises: they can check their answer, and they can ask for a hint. If students click on "Check Answer" and their answer is wrong, the button they just clicked shakes back and forth. Presumably this means the answer is not correct, but both of these feedback opportunities are opaque.
The box shaking back and forth does not tell the learner WHAT they did incorrectly; indeed, a shaking button is not even explicitly a signal that something is wrong.
Next, feedback that is tangible is something a student understands. Look at the series of four hints students are given when working on this problem.
Now imagine you are a 3rd grade student, and you do not know what a right angle is, what is meant by 4 equal sides, or what a rhombus, rectangle, or square is. How does this feedback help you understand these key concepts, or the connection between these concepts and the diagrams given? Finally, telling students that the correct answer is "none of these" is not helpful feedback because it does not respond to whatever thinking a student may have done about this problem; it just states something that should be accepted as fact.
Actionable:
Giving students the correct answer to a problem, or telling them that they are wrong, does not give them actionable feedback because it does not give them any actions they can take in order to improve. A student who receives this feedback has no alternative but to continue guessing until they are able to get the answers right. This does not lead to student understanding of mathematical concepts.
User-friendly:
The key to user-friendly feedback is to give a small amount of feedback at a time, so that students have an opportunity to make use of the feedback without being overwhelmed, while still giving them feedback they know how to act on. Feedback that is constructed in language students may not know, or which requires them to use knowledge they do not have, is not user-friendly. What evidence do the developers of the Khan Academy have of the effectiveness of the feedback they have included in their system?
Timely:
What little feedback the Khan Academy exercises do give is certainly timely.
Ongoing:
This feedback is also ongoing. If students want another chance to try again, they just need to do another exercise.
Consistent:
The feedback offered by the Khan Academy is certainly consistent; in fact, for any given exercise it is likely virtually identical. This actually leads to problems, since some variance in the language used in feedback is often necessary for students to make sense of it.
This analysis of the feedback offered by the Khan Academy platform suggests that its mechanisms for feedback fail on four of the seven criteria shared by Grant Wiggins. One open question here: is it possible for a computerized feedback system to give feedback that meets all seven of these criteria? And another: what do students learn from using a system that gives poor feedback?