The Progress Myth (revisited)

As another story filters through of a headteacher wasting their time making up progress data because their system doesn’t produce data in a format acceptable to the consultant carrying out a school review, I thought I’d revisit the issue of progress calculations based on tracking data. I could rant about this all day, but essentially my concerns with points-based progress measures are threefold:

1) They turn learning into a race, encouraging pace at the expense of depth, and as such are at odds with the principles of the new curriculum.

2) The points are linked to broad best-fit bands (i.e. levels), which obscure the detail of learning and risk moving pupils on with gaps, and so, again, are at odds with the principles of the new curriculum.

3) The points are often used for performance management and so there is a high risk the data will be corrupted in order to show the expected picture. 

Seriously, this data is fabricated, meaningless nonsense that bears no relation to pupils’ learning and has no positive impact upon it. Such approaches are based on arbitrary thresholds between made-up categories for which there is no rationale or credible statistical basis. As such we end up with inconsistent, inaccurate data that cannot be relied upon for in-school comparisons, let alone comparisons between schools.

We should not – MUST NOT – produce data solely to satisfy the data cravings of external agencies, be they Ofsted, LAs, RSCs, or even Governors. Whatever data we supply should be a byproduct of our assessment system, not an exercise in itself, and there is plenty of ammunition out there to defend yourself with. Read the Ofsted handbook, the report of the Commission on Assessment without Levels, and the data management report from the Workload review group – they all say that you have the freedom to track pupil progress in any way you see fit that complements your curriculum. And no one is stating that we have to quantify progress; only that we need to show it. An important distinction. The most important thing is that our approach to assessment has a genuine and demonstrable impact on pupils’ learning. There really is no value in collecting and producing data beyond that remit. It is a waste of time.

Everyone really needs to wake up to the fact that we are recreating levels in a thousand different ways and reproducing all the issues associated with them. Tracking to support formative assessment should be all about the detail: the gaps, the strengths and weaknesses, the depth of understanding, the support required. Once you introduce thresholds and broad categories with associated points scores, the game is lost. You are back to a numbers game and the system is no longer fit for purpose.

If you want robust progress measures, then use tests: monitor changes in pupils’ percentile rank, reading age, or standardised scores. Perhaps providers of optional tests should develop their own interim VA measures. Or maybe – and here’s a radical idea – people need to stop relying on data and actually look at pupils’ work over time. We have to realise that progress measures based on teacher assessments are counter-intuitive, counter-productive, and a potential risk to pupils’ learning. And understand that numbers in a tracking system do not prove that pupils have made progress; they just prove that someone has entered some data into the system.

Schools need to be more confident in dealing with people who request such data. A thorough understanding of the reasons for the removal of levels is vital in order to counter demands for data that is essentially levels-based. Using a system that doesn’t produce such data is also a good step – if you haven’t got it, you can’t use it. And then we all need to practise the following phrase:

“We don’t do that in this school; it has no positive impact on learning.”

Keep the faith and we’ll get there.





© 2024 Sig+ for School Data. All Rights Reserved.