How will KS2 progress be measured next year when we run out of KS1 levels? (and is it even worth it?)

This is the question I get asked these days (along with ‘if I use this code, does that mean this pupil will still be included in our results?’ but that’s for another day). I have tackled it before here, but thought it was due another visit.

Four years ago we had two progress measures: levels of progress (or ‘expected progress’) and value added. Everyone focused on the former because a) floor standards, b) targets, and c) easy to understand. Value added played second fiddle to the levels measure because, let’s be honest, very few people understood what 98.3 actually meant, and so it was ignored. Unless it was blue (Panic!) or green (Party!), in which case it wasn’t. The crazy thing is that the two progress measures would often contradict one another – it was entirely feasible for every pupil to make ‘expected progress’ and yet have VA scores that were significantly below average. Translation: all your pupils have done exactly what we expected them to do, and it’s nowhere near good enough. I think many people believed value added had something to do with the number 12. It didn’t.
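
To see how that contradiction plays out, here’s a minimal sketch using the old point scores (a 2b was 15 points, a secure level 4 was 27, and ‘expected progress’ meant 12 points). The comparator figure is made up, and the published VA score was rescaled to sit around 100, but the logic is the same:

```python
# Illustrative only: old point scores (2b = 15, a secure level 4 = 27) and a
# made-up national comparator. The published VA score was rescaled to sit
# around 100; here we just look at the raw gap.

ks1_points = 15    # pupil was a 2b at KS1
ks2_points = 27    # pupil reaches a secure level 4 at KS2

# 'Expected progress' meant two whole levels, i.e. 12 points from KS1 to KS2.
made_expected_progress = (ks2_points - ks1_points) >= 12

# Value added: compare the pupil's KS2 points to the national average KS2
# points for pupils with the same starting point (figure invented here).
national_avg_ks2_for_2b = 28.5
va_gap = ks2_points - national_avg_ks2_for_2b

print(made_expected_progress)   # True: two levels of progress made
print(va_gap)                   # -1.5: yet below the average for 2b starters
```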

But that’s history. Since 2016, levels have gone and we have a new progress measure. Except we haven’t. It’s still value added, calculated in much the same way as before: a pupil’s attainment score at KS2 is compared to the national average attainment score of pupils in the same prior attainment group (PAG). The methodology has been tweaked but the concept remains the same. We now have 24 prior attainment groups based on pupils’ KS1 average point scores, and that KS1 average now involves a double weighting of maths. This does result in some oddities, e.g. pupils who were 2b in all subjects at KS1 ending up in the same prior attainment group as pupils who were Level 1 in reading and writing and Level 3 in maths. Most people find that a bit nuts.

But the concept remains the same. It all stems from that average score at KS1, which defines the pupil’s prior attainment group, and therefore defines their progress score at KS2; and we calculate KS1 average scores using the well-established point score system for levels and sublevels. Where do those point scores come from, by the way? Why is a 2b worth 15 points? What is the basis for this? Anyway…
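
For the technically minded, here’s a rough sketch of the current calculation. The sublevel point scores are the familiar ones (2b = 15 and so on), but the comparator figure at the end is invented purely for illustration:

```python
# A rough sketch of the current KS2 value added calculation. The sublevel
# point scores are the familiar ones (2b = 15, etc.); the comparator figure
# at the end is invented purely for illustration.

KS1_POINTS = {"1": 9, "2c": 13, "2b": 15, "2a": 17, "3": 21}

def ks1_aps(reading, writing, maths):
    """KS1 average point score, with maths double-weighted."""
    return (KS1_POINTS[reading] + KS1_POINTS[writing] + 2 * KS1_POINTS[maths]) / 4

# The oddity mentioned above: 2b across the board produces the same average
# (and therefore the same prior attainment group) as L1/L1/L3.
print(ks1_aps("2b", "2b", "2b"))   # 15.0
print(ks1_aps("1", "1", "3"))      # 15.0

# The progress score is then the pupil's KS2 scaled score minus the national
# average KS2 score of everyone in the same prior attainment group.
pupil_ks2_scaled = 104
national_avg_for_pag = 105.5       # made-up comparator
print(round(pupil_ks2_scaled - national_avg_for_pag, 1))   # -1.5
```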

The issue is this: we only have one more year group with levels from KS1 – the current year 6. The current year 5 do not have levels, and more and more people are now asking how progress will be measured next year. It’s raised at pretty much every meeting I attend. Given that the current measure is based on KS1 average point scores, how can it be maintained when the DfE did not collect KS1 scaled scores? There are no scores associated with the new KS1 assessments. What is the answer to the following calculation:

(EXS + WTS + GDS) / 3

or, with maths double-weighted, (EXS + WTS + GDS + GDS) / 4

And how many prior attainment groups are we going to have when there are far fewer possible outcomes at KS1? Depending on subject, between 92 and 94% of pupils are either working towards the expected standard, working at the expected standard, or working at greater depth. That’s not a lot to work with, and the fewer prior attainment groups we have, the clumsier and noisier the progress measure becomes. Imagine if all pupils who were EXS in all subjects at KS1 were placed into one national prior attainment group. Imagine how big that group would be. Each pupil’s KS2 score would then be compared to the national average KS2 score for that group, and let’s say that’s 105.7 in 2020 (NB: IT’S NOT 105.7. I MADE THAT UP). Imagine the range of scores either side of that. Is that really a useful, meaningful measure?
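
For what it’s worth, here’s a quick simulation of that scenario. The scores are randomly generated, so purely illustrative, but it shows what happens when one giant group is measured against a single average:

```python
# Purely illustrative, simulated data: one enormous prior attainment group
# in which every pupil's progress score is their KS2 scaled score minus the
# same single group average.

import random
random.seed(1)

# Pretend KS2 scaled scores for 100,000 'EXS across the board' pupils,
# clamped to the 80-120 scaled score range.
ks2_scores = [max(80, min(120, round(random.gauss(105, 6)))) for _ in range(100_000)]

group_average = sum(ks2_scores) / len(ks2_scores)
progress = [score - group_average for score in ks2_scores]

print(round(group_average, 1))                            # the single comparator
print(round(min(progress), 1), round(max(progress), 1))   # a wide range either side of it
```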

So, what are the options next year when we finally run out of KS1 levels?

  1. Carry on as now. This would most likely require retrospectively assigning nominal scores to KS1 teacher assessments in order to calculate KS1 average scores and establish prior attainment groups. Something like this perhaps: BLW = 70, PKF = 80, WTS = 90, EXS = 100, GDS = 110 (NB: IT’S NOT THAT. I MADE THAT UP). Or maybe 1-5, or 3, 9, 11, 15 and 21 (Ha ha! See what I did there?). OK, we can have scores for individual p-scales as we do now, but that applies to 1-2% of the national cohort. Whatever happens, we end up with fewer prior attainment groups and an even more flawed system than we currently have.
  2. Develop a new way of devising PAGs. Essentially it’s still value added, but instead of KS1 average scores it involves a lookup table of every possible combination of p-scale, BLW, PKF, WTS, EXS and GDS assessments in reading, writing and maths (there’s a rough sketch of this after the list). No KS1 scores required, and we’d have more prior attainment groups as a result, which should mean a more refined progress measure. We’ll still end up with a huge prior attainment group for EXS x3 pupils though; and, to reiterate, it’s still a value added measure.
  3. Go back to transition matrices. Much like Ofsted were doing in the IDSR last year (and stopped for some reason): show the percentage of low, middle and high prior attainers that achieved the expected standard and greater depth, compared to national averages. Low, middle and high prior attainment could be based on combined subjects at KS1 (e.g. two or more ‘expecteds’ = middle, two or more ‘greater depths’ = high, anything else = low), or it could be done subject by subject (again, there’s a rough sketch after the list). Either way, I feel this is a step backwards from what we have now. On the plus side, it’s straightforward and transparent, and many I’ve spoken to seem to favour this, but I think we need to be careful what we wish for. It’s a return to expected progress measures, and target setting won’t be far behind.
  4. Don’t bother. Have a moratorium on KS1-2 progress measures for a year or two, which is a radical solution but one I’m starting to think should be seriously considered. Bad data is, after all, not better than no data at all. Plus it would do away with those nasty, overly simplistic, brightly coloured, prefect-style badges of progress in the performance tables that few people understand. Not that that stops them from making inferences about school standards. This, of course, would mean we are left with attainment alone (percentages achieving the expected and higher standards, and average scaled scores). These would no doubt continue to be compared to national averages, and could be accompanied by a statement saying whether results differ significantly from those figures. In addition, results could be compared to those of similar schools (I mean properly similar schools based on size, demographics, deprivation and mobility, as in the EEF Families of Schools database), with another statement to indicate any significant* difference.
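
For option 2, here’s a rough sketch of what that lookup table might look like. The outcome list is simplified (individual p-scales are collapsed into one code) and the grouping itself is entirely an assumption, not a DfE method:

```python
# A sketch of option 2: prior attainment groups derived directly from the
# combination of KS1 outcomes, with no point scores involved. Outcome codes
# are simplified and the grouping is an assumption, not the DfE's method.

from itertools import product

OUTCOMES = ["BLW", "PKF", "WTS", "EXS", "GDS"]   # p-scales collapsed into BLW here

# Every possible reading/writing/maths combination becomes its own PAG.
PAG_LOOKUP = {combo: pag for pag, combo in enumerate(product(OUTCOMES, repeat=3), start=1)}

print(len(PAG_LOOKUP))                      # 125 groups from 5 outcomes x 3 subjects
print(PAG_LOOKUP[("EXS", "EXS", "EXS")])    # the (still enormous) all-EXS group's number

def progress_score(ks1_combo, ks2_scaled, national_avg_by_pag):
    """Still value added: KS2 scaled score minus the national average for the PAG.
    national_avg_by_pag is a hypothetical dict of PAG number -> average KS2 score."""
    return ks2_scaled - national_avg_by_pag[PAG_LOOKUP[ks1_combo]]
```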
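
And for option 3, a sketch of the banding rule given above (two or more ‘expecteds’ = middle, two or more ‘greater depths’ = high, anything else = low). Whether GDS also counts as an ‘expected’, and the pupil records themselves, are my assumptions:

```python
# A sketch of option 3: transition-matrix style reporting. Banding rule and
# pupil records are illustrative assumptions only.

def prior_band(reading, writing, maths):
    """Two or more GDS = high; two or more at EXS or above = middle; else low."""
    outcomes = [reading, writing, maths]
    if sum(o == "GDS" for o in outcomes) >= 2:
        return "high"
    if sum(o in ("EXS", "GDS") for o in outcomes) >= 2:
        return "middle"
    return "low"

# (KS1 reading/writing/maths, reached the expected standard at KS2?)
pupils = [
    (("EXS", "EXS", "GDS"), True),
    (("WTS", "EXS", "EXS"), True),
    (("GDS", "GDS", "GDS"), True),
    (("WTS", "WTS", "WTS"), False),
]

for band in ("low", "middle", "high"):
    group = [reached for ks1, reached in pupils if prior_band(*ks1) == band]
    if group:
        print(band, f"{100 * sum(group) / len(group):.0f}% reached the expected standard")
```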

I think it’s time to have a conversation about the nature and purpose of data used in the accountability system. Is a desire for numbers overruling the principles of accuracy and meaning? The recent attempts to ‘improve’ the progress measures – assigning and tweaking nominal scores for pupils working below the standard of the tests, including data from special schools to reduce the estimates for the lowest attainers, capping extreme negative outliers – just feel like patching up a leaky boat. And then there’s the writing progress measure, which should never have happened in the first place.

Data is being bent way beyond its elastic limit in order to derive numbers for the sake of convenience, and it increasingly feels like our entire accountability system is a product of errors and bias. Maybe it’s time to admit defeat and take this opportunity to just report the things we can report with a reasonable degree of confidence.

* I’m not too keen on the significance thing either but, on balance, it’s probably useful here.
