The Progress Obsession

Despite my best efforts to convince people of the futility of the exercise, probably the most common question I get asked is:

“How do I show progress?” 

Why is this futile? Because what they are really asking is: “How do I use data to ‘prove’ that pupils have made ‘good’ progress?”

The reason for the inverted commas is that data does not really ‘prove’ anything – especially when it’s based on something as subjective as teacher assessment – and what constitutes ‘good’ progress varies from pupil to pupil. What is regarded as ‘good’ for one pupil may not be enough for the next. One pupil’s gentle stroll is another pupil’s mountain to climb. Progress is a multi-faceted thing. It is catching up, filling gaps, deepening understanding, and overcoming those difficult barriers to learning. It can be accelerating through curriculum content, or it can be consolidating what has been learnt; it can mean no longer needing support with fundamental concepts, or it can be about mastering complex skills. Different pupils progress at different rates and get to their destination in different ways.

Progress is not simple, neat or linear – there is no one-size-fits-all pathway – and yet all too often we assume it is for the sake of a convenient metric. We are so desperate for neat numbers – for numerical proxies of learning – that we are all too willing to overlook the fact that they contradict reality, and in some cases may even backfire by presenting an average line that no pupil actually follows. Rather than a line that fits the pupil, we make pupils fit the line.

Basically, we want two numbers that supposedly represent pupils’ learning at different points in time. We then subtract the first number from the later one and, if the numbers go up – as they invariably do – then this is somehow seen as evidence of the progress that pupils have made. Perhaps if they have gone up by a certain amount then this is defined as ‘expected’, and if it’s gone up by more than that it’s ‘above expected’. We can now RAG rate our pupils, place them into one of three convenient boxes, ready for when Ofsted or the LA advisor pay a visit. Some pupils are always red, and that frustrates us because it doesn’t truly reflect the fantastic progress those children have actually made, but what can we do? That’s the way the system works. We have to do this because we have to show progress.
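To see just how little is going on under the bonnet, here is a sketch of the kind of calculation described above. Everything in it – the function name, the scores, the ‘expected’ threshold – is invented for illustration; the point is that the whole machinery is a subtraction and a couple of arbitrary cut-offs.

```python
# A hypothetical sketch of the subtract-and-box progress measure:
# two point-in-time scores, one subtraction, and arbitrary thresholds
# that RAG rate each pupil. All names and thresholds are invented.

EXPECTED_GAIN = 3  # arbitrary: the gain deemed 'expected' over the period

def rag_rate(autumn_score: int, summer_score: int) -> str:
    """Subtract the earlier score from the later one and box the result."""
    gain = summer_score - autumn_score
    if gain > EXPECTED_GAIN:
        return "green"   # 'above expected'
    if gain == EXPECTED_GAIN:
        return "amber"   # 'expected'
    return "red"         # 'below expected'

# One number in, one colour out - nothing here can register catching up,
# filling gaps, or deepening understanding.
print(rag_rate(10, 14))  # gain of 4 -> 'green'
print(rag_rate(10, 12))  # gain of 2 -> 'red'
```

A pupil two points short of the threshold lands in the same red box every time, however remarkable their actual journey – which is exactly the frustration described above.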

Right?

First, let’s get one thing straight: data in a tracking system just proves that someone entered some data in a tracking system. It proves nothing about learning – it could be entirely made up. The more onerous the tracking process – remember that 30 objectives for 30 pupils is 900 assessments – the more likely teachers are to leave it all to the last minute and block fill. The cracks in the system are already beginning to show. If we then assign pupils into some sort of best-fit category based on how many objectives have been ticked as achieved (count the green ones!) we have recreated levels. These categories are inevitably separated by arbitrary thresholds, which can encourage teachers to give the benefit of the doubt and tick the objectives that push pupils into the next box (depending on the time of year of course – we don’t want to show too much progress too early). Those cracks are getting wider. And finally, each category has a score attached, which now becomes the main focus. The entire curriculum is portioned into equal units of equal value and progress through it is seen as linear. Those cracks have now become an oceanic rift with the data on one side and the classroom on the other.
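The best-fit step can be sketched in the same way. Again, the category names, cut-offs and scores below are invented for illustration; what matters is that ticked objectives are crossed against arbitrary thresholds to recreate a level, which is then reduced to a single score.

```python
# A hypothetical sketch of the 'count the green ones' best-fit logic:
# a tick count becomes a category, and the category carries a score.
# Category names, cut-offs and scores are all invented.

def best_fit(objectives_achieved: int, total_objectives: int = 30) -> tuple[str, int]:
    """Convert a count of ticked objectives into a category and its score."""
    fraction = objectives_achieved / total_objectives
    if fraction >= 0.8:      # arbitrary threshold
        return ("exceeding", 3)
    if fraction >= 0.5:      # arbitrary threshold
        return ("expected", 2)
    return ("emerging", 1)

# 15 ticks out of 30 lands in 'expected'; one fewer tick drops a whole
# category and a whole score point - a standing incentive to give the
# benefit of the doubt on that one borderline objective.
print(best_fit(15))  # ('expected', 2)
print(best_fit(14))  # ('emerging', 1)
```

Nothing about the pupil changes between 14 ticks and 15, yet the number attached to them does – and it is the number, not the learning, that the system then reports.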

Assessment is detached from learning.

This rift can be healed but only if we a) wean ourselves off our obsession with measuring progress, and b) sever the link between teacher assessment and accountability. Teacher assessment should be ring-fenced: it should be used for formative purposes alone. Once we introduce an element of accountability into the process, the game is lost and data will almost inevitably become distorted. Besides, it’s not possible to use teacher assessment to measure progress without recreating some form of levels, with all their inherent flaws and risks.

Having a progress measure is desirable but does our desire for data outweigh the need for accuracy and meaning? Do our progress measures promote pace at the expense of depth? Can they influence the curriculum that pupils experience? And can such measures lead to the distortion of data, rendering it useless? It is somewhat ironic that measures put in place for the purposes of school improvement may actually be a risk to children’s learning.

It’s worth thinking about.
© 2024 Sig+ for School Data. All Rights Reserved.