The Power Law is one of several calculation methods available to you within JumpRope as a way of representing student mastery on a specific standard. As more teachers, schools, and districts adopt the Power Law as a way of representing student mastery, I thought I'd spend some time putting together some resources that can help explain the Power Law and make it more useful in the classroom. It is a powerful, research-based method of answering the question "based on the evidence, what is the student's mastery at this moment?" but it has some complexity and a few quirks.
Before I begin, here is a quick reference:
| | Assessment #1 | Assessment #2 | Assessment #3 | Assessment #4 | Power Law Score |
|---|---|---|---|---|---|
| Student #1 | 1.00 | 2.00 | 3.00 | 4.00 | 4.00 |
| Student #2 | 1.00 | 3.00 | 2.00 | 4.00 | 3.66 |
| Student #3 | 2.00 | 4.00 | 1.00 | 3.00 | 2.16 |
| Student #4 | 4.00 | 3.00 | 2.00 | 1.00 | 1.28 |
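For the technically curious, the chart above can be approximated with a short script. The sketch below is an illustration, not JumpRope's production code: it assumes the common formulation of Marzano's power law of learning, fitting y = a·x^b to the scores by least squares in log-log space and evaluating the fitted curve at the final assessment. Under that assumption, it reproduces the Power Law Score column above to two decimals.

```python
import math

def power_law(scores):
    # Hedged sketch: fit y = a * x^b in log-log space (simple linear
    # regression on ln x, ln y), then evaluate the fit at the final
    # assessment (x = n). JumpRope's actual implementation may differ
    # in details such as weighting. Scores must be > 0; intended for
    # 3 or more scores.
    n = len(scores)
    xs = [math.log(i + 1) for i in range(n)]   # assessment order 1..n
    ys = [math.log(s) for s in scores]
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return math.exp(a + b * math.log(n))

# The four students from the chart above:
for seq in ([1, 2, 3, 4], [1, 3, 2, 4], [2, 4, 1, 3], [4, 3, 2, 1]):
    print(seq, round(power_law(seq), 2))
```

Running this prints 4.0, 3.66, 2.16, and 1.28 for the four students, matching the chart.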
Advantages
- It responds to growth and does not penalize students for low scores early in the learning process. At the same time, it is not as "harsh" as the Most Recent calculation and incorporates consistency and trends as opposed to just the most recent score.
- It's backed by research. JumpRope didn't make up the Power Law; rather, we implement an algorithm based on research into the cognitive aspects of learning. In other words, both we and you can justify its use to others where necessary. Want to arm yourself to the teeth? Check out Marzano's book, Transforming Classroom Grading.
- It is generally intuitive. If we asked you to look across all of a student's scores for a standard every time you added an assessment and come up with a number to represent their "current" level of mastery, most teachers would, most of the time, agree with the Power Law. In other words, it arrives at the same or a similar outcome with a lot less work on your part. This doesn't apply to every teacher in every situation, but I've spoken with many teachers across grade levels and content areas, and to a large extent there is intuitive agreement.
Challenges and Solutions
- It is difficult to explain (for example, to parents and students). Basically, it's an enormously complex algorithm (see the attached Word document for more on this), and it's exceedingly difficult to reproduce the mathematics by hand to figure out exactly where a number came from.
- Solution 1: use the chart above or some version of the document attached below to explain the Power Law intuitively - do NOT try to use the algorithm in most cases, as it tends to cause heads to explode. In our experience, if you can have the conversation objectively (i.e., without the emotion surrounding a student's specific scores), most students and parents will agree that Student #1 and Student #4 "deserve" different overall scores in the scenario in the chart above. Once you've made that point, it is fairly basic logic to explain that a simple average would give every student a 2.5, so you instead use an equation that accounts for growth.
- Solution 2: I strongly recommend explaining it to students and parents before they first see their "grades." If you find yourself explaining it for the first time with the student's own grade in front of them, the conversation is often too emotionally charged to make logical arguments.
- There are some quirks. Certain scenarios can lead to counterintuitive scores emerging from the Power Law. A classic example: a student who scores a 3, 3, 3, followed by a 4 would get about a 3.6 - lower than the 4.00 earned by Student #1 in the chart above, who scored 1, 2, 3, then 4. Why? The Power Law has less evidence of growth and thus statistically responds with a value that represents the "lack of confidence" in the student's ability to acquire the knowledge or skill. I know, it bugs me too...that's why I'm bringing it up!
- Solution 1: In some cases, it may make sense to override this score. Most JumpRope setups allow you to override student scores (depends on the settings at your school), just make sure that this is the exception and not the rule.
- Solution 2: If early assessments are formative (and/or scaffolded), consider representing that in JumpRope. In other words, did that first score truly mean that the student "met the standard," or just that they "did what they were supposed to do"? The Power Law gives us the incredible opportunity to shift our feedback toward standards, and this is actually an example of how giving students lower scores on formative assessments (assuming they truly are at a lower level of mastery) actually *boosts* the student's final score on the standard instead of punishing them. It may sound a bit crazy, but I personally think that it makes sense in the context of mastery-based grading, and it is the premise the Power Law was built upon.
- Solution 3: Play around with it and discover its quirks and ways around them. We've built a simulator that lets you easily plug in different score scenarios to see what the power law will spit out. Basically, you can mess around with row 3 in this document.
- It does not account for teacher-assigned assessment weights. This is the most difficult part for many teachers: you don't have control over the weight of each assessment toward the final grade. In essence, you can think of the Power Law as automatically assigning weights to the assessments based on the order in which they were given (a bit of an oversimplification, but it captures the essence). The reality is that the Power Law was simply not designed to work with teacher weights, and for us to support weights would mean modifying it and losing the "research-based" aspect. The Decaying Average calculation type is our attempt to build in growth-based calculation while also accounting for weights, but weights still do not work with the Power Law.
- Solution 1: Assess often. Effectively, the number of times you assess something becomes your new technique to "weigh" things. Consider, for example, scoring each draft of a paper separately or a test re-take as an additional assessment in JumpRope instead of editing the existing score. These strategies work well with the Power Law and effectively make the paper and the test (in these examples) "worth more" towards the overall standard mastery.
- Solution 2: Use the special weight of 0.01. If you give an assessment a weight of 0.01, it has a special property with regards to the power law: it will "count" like a normal assessment until you score at least one assessment with a weight of 1 or higher, at which time it will still show up on reports but will be completely thrown out of the power law calculation. It's basically a trick that we built in to let you score formative assessments (weight = 0.01) during the learning process and then to later give a summative assessment (weight >= 1) that "trumps" the formative. It will still spit out a score early on but the summative assessments will be the only scores that count in the end (and it works with multiple weight >=1 assessments). Contact us if you are interested in learning more about this trick. By the way, an assessment with a weight of 0 (zero) will not count at all in the Power Law either. These are the only two exceptions to the weight being ignored.
- Solution 3: Give up some control and let it do its thing. Don't take this the wrong way - no judgment is implied. I'd be remiss, though, not to mention that attempts to exert hyper-specific control over overall grades where the Power Law is in use will mostly be exercises in frustration. In a world where we have shifted focus from the overall grade toward student mastery of specific skills, the Power Law works quite nicely most of the time. It's also useful to remember that all grading systems have quirks and weaknesses.
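The weight rules above boil down to a short filter. This is a hypothetical sketch of the behavior as described (the function name and tuple format are illustrative, not JumpRope's API): weight 0 never counts, weight 0.01 counts only until at least one weight >= 1 assessment is scored, and weights otherwise have no effect on the Power Law.

```python
def scores_for_power_law(assessments):
    # assessments: chronological list of (score, weight) pairs.
    # Hypothetical sketch of the weight rules described above:
    #   weight 0    -> never counts
    #   weight 0.01 -> counts only while no weight >= 1 exists
    #   other weights are ignored by the Power Law itself
    summative = [s for s, w in assessments if w >= 1]
    if summative:
        return summative                 # formatives are thrown out
    return [s for s, w in assessments if w != 0]

# Formative scores count early on...
print(scores_for_power_law([(2, 0.01), (3, 0.01)]))
# ...but drop out once a summative (weight >= 1) is scored.
print(scores_for_power_law([(2, 0.01), (3, 0.01), (4, 1)]))
```

The resulting list would then be fed to the Power Law calculation as usual.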
- There are a couple of special cases to be aware of.
- If you have fewer than 3 scores for a target, the Power Law doesn't work mathematically. JumpRope simply shows the most recent score if you have 2 scores for a target, and the only score if you've entered just 1.
- You need to avoid having two assessments on the same date. If both occur on the same date, only one will be factored into the score, because JumpRope cannot determine the order in which they were given. Consider simply moving the date on one of the assessments and/or pre-combining the scores (think entering an overall "presentation and paper" score instead of an individual presentation score and an individual paper score with the same due date).
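These special cases amount to a simple dispatch rule. A hypothetical sketch (the names are illustrative, not JumpRope's API): with one score, show it; with two, show the most recent; with three or more, hand off to the Power Law.

```python
def displayed_score(scores, power_law_fit):
    # scores: chronological list of scores for one target.
    # power_law_fit: a callable implementing the Power Law, assumed
    # to be defined elsewhere; it is only used with 3+ scores.
    if not scores:
        return None           # nothing assessed yet
    if len(scores) == 1:
        return scores[0]      # the only score
    if len(scores) == 2:
        return scores[-1]     # most recent score
    return power_law_fit(scores)
```

For example, `displayed_score([1, 3], power_law_fit)` returns 3 without ever calling the fit.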
There we have it - more detail than you ever wanted on the Power Law! I hope that these resources and tips will help you use it more effectively in your classroom. Remember that the Power Law is one of several calculation types that JumpRope supports toward the goal of accurately representing student mastery.