Mitigation in writing

The purpose of this post is not to explain the mechanics and issues of the KS2 writing progress measure – I’ve already covered that in enough detail here – but I do want to offer a method to help schools counter the nonsense. Suffice it to say, the measure is badly designed, imprecise, and has a habit of making things look a lot worse than they are. In the absence of fine graded test scores, we only have the teacher assessments to work with. Rather than decide that this is insufficient data for an accurate progress measure (is there any such thing?), the decision was taken to assign nominal scores to the teacher assessments as follows:

Working towards the expected standard: 91
Working at the expected standard: 103
Working at greater depth: 113
These nominal scores for writing are then compared against fine graded, scaled score estimates – e.g. 96.69 or 105.77 – just as they are in reading and maths. Unfortunately, unlike reading and maths where pupils actually have test scores, in writing there is no such thing. The benchmarks that pupils’ results are compared against are therefore unachievable, and the big leaps between nominal scores result in some huge negative and positive differences. 
So if schools have lots of pupils assessed as ‘working towards’ they tend to have big negative progress scores; if they have lots assessed as ‘working at the expected standard’, progress scores tend to be positive. The odd thing is that many, if not most, pupils assessed as working towards, who have negative progress scores, have benchmarks of greater than 91 but less than 103. This means that if they achieved the expected standard (and were assigned a nominal score of 103) their progress would be positive. And this got me thinking: surely all such pupils are actually in line, aren’t they? They achieve working towards, they have a score of 91, their benchmark is 97.26 – which they can’t achieve – and the next available score is 103. That, in my mind, means their progress is broadly in line with average. They’ve done what you’d expect them to do from their particular start point. They can’t hit the benchmark, they can only fall short or exceed it. 
To counter the negative (and significantly negative) scores many schools have found themselves saddled with for KS2 writing, I propose the following approach for individual pupils: 
Above: positive progress score
In line: negative progress score but estimate is lower than next nominal score threshold
Below: negative progress score and estimate is higher than next nominal score threshold
For pupils assessed as working towards, this works out as:
Above: positive progress score
In line: negative progress score but estimate between 91 and 103
Below: negative progress score and estimate above 103
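To make the banding concrete, here is a minimal sketch of the general rule in Python. The nominal scores and the Above/In line/Below logic come from the post; the function name, the WTS/EXS/GDS shorthand for the three teacher assessments, and the decision to treat a ‘greater depth’ pupil with an out-of-reach estimate as in line are my own assumptions.

```python
# Nominal KS2 writing scores assigned to each teacher assessment (from the post).
# WTS/EXS/GDS = working towards / expected standard / greater depth (shorthand).
NOMINAL = {"WTS": 91, "EXS": 103, "GDS": 113}

def band(assessment: str, estimate: float) -> str:
    """Band a pupil's writing progress as Above, In line, or Below.

    Progress score = nominal score minus estimate. A negative score only
    counts as 'Below' if the pupil could actually have reached the estimate,
    i.e. the estimate is at or above the next nominal score up.
    """
    score = NOMINAL[assessment]
    progress = score - estimate
    if progress > 0:
        return "Above"
    # Find the next nominal score above the pupil's actual result; a GDS
    # pupil has no higher score available, so their estimate is treated as
    # unreachable (assumption: banded as in line).
    higher = [v for v in NOMINAL.values() if v > score]
    next_nominal = min(higher) if higher else float("inf")
    return "In line" if estimate < next_nominal else "Below"
```

So the pupil from the example above – working towards, benchmark of 97.26 – comes out as in line rather than below, because the only scores available to them were 91 and 103.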
In one school recently I suggested a slight variation, which acts as a compromise to DfE methodology:
Above: positive progress score
In line: negative progress score and estimate between 91 and 100
Below: negative progress score and estimate between 100 and 103
Well below: negative progress score and estimate >103
Pupils were then colour coded dark green, light green, orange, and red respectively. They only had one ‘red’ pupil.
Once you’ve chosen your approach, simply state the percentage of pupils that fall into each category. If you like you could present it in a progress matrix from KS1 writing level start point. 
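For the four-band compromise, the percentages can be tallied in a few lines. This sketch applies the cut-offs above for pupils assessed as working towards (nominal score 91); the function names are hypothetical and the colour labels simply mirror the school example.

```python
from collections import Counter

# Band labels and colour codes for 'working towards' pupils, using the
# compromise cut-offs from the post (91 / 100 / 103).
BANDS = [
    ("Above", "dark green"),
    ("In line", "light green"),
    ("Below", "orange"),
    ("Well below", "red"),
]

def band_wts(estimate: float) -> str:
    """Band a 'working towards' pupil (nominal score 91) by their estimate."""
    progress = 91 - estimate
    if progress > 0:
        return "Above"
    if estimate <= 100:
        return "In line"
    if estimate <= 103:
        return "Below"
    return "Well below"

def summarise(estimates: list[float]) -> dict[str, float]:
    """Percentage of pupils falling into each band."""
    counts = Counter(band_wts(e) for e in estimates)
    n = len(estimates)
    return {label: round(100 * counts[label] / n, 1) for label, _ in BANDS}
```

Feeding in each pupil’s estimate from the KS2 pupil list gives the headline percentages directly; the same tallies could then be split by KS1 writing level to build the progress matrix.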
I may add this approach into my VA calculator at some point, but in the meantime you can do it manually from the KS2 pupil list, which you can download from RAISE. I definitely think it’s worth exploring. Chances are it’ll make your writing data look better.
It’ll be fairer too. 
© 2024 Sig+ for School Data. All Rights Reserved.