I Predict a Riot

I’ve come to the conclusion that I’m easily misunderstood. I rock up somewhere to give a talk, start ranting about how we have to wean ourselves off our number addiction – stop trying to measure and quantify everything that moves (and if it doesn’t move, measure it until it does) – and people are left thinking “who the hell is this guy? I thought he was a data analyst but he’s telling us to quit the data”. Well, that’s not quite true. I’m telling people to quit the dodgy data, to get beyond this ‘bad data is better than no data at all’ mindset. But much of what I see in pupil progress tracking falls well and truly into the bad data camp.

Essentially, there are three main things I take issue with when it comes to tracking systems:

1) The recreation of levels – most systems still use best-fit bands linked to pupils’ coverage of the curriculum. *coughs* “sublevels!”

2) The simplistic, linear points-based progress measures and associated ‘expected’ rates of progress. *coughs* “APS!”

3) The attempts to use tracking data, based on ongoing teacher assessment against key objectives, to predict end of key stage test outcomes. *coughs* “astrology!”

So, earlier this week, I was attempting to explain my thoughts on points 1 and 2 when someone stated – and I’m paraphrasing here – that ‘the DfE quantify everything so we need to do the same in order to predict outcomes’. First, let me be clear: I am not suggesting schools give up on data – I think that would be foolhardy – I just think we need to be smarter and only collect data that has a genuine and demonstrable impact on pupils’ learning. We just have to accept that we can’t quantify everything – as much as some may want to – and admit that much of the data we generate is for purposes of accountability and performance management, not learning. Second, I do not believe we should use tracking data to predict end of key stage outcomes. It’s a bad idea for a number of reasons.

First, such predictions hinge on a school’s (or teacher’s) definition of ‘secure’/’on track’/’at age related expectations’. What constitutes so-called ‘secure’ quite clearly differs from one school to another. Many schools are using systems that provide a convenient answer for them, and this is often based on the simplistic expectation that pupils will achieve a certain percentage of objectives per term: a pupil is expected to achieve, say, a third of the objectives in the first term, two thirds in the second term, and so on. I recall an amusing yet worrying Twitter conversation where teachers offered up their definitions of end of year expectations, all based on a percentage of objectives achieved. Various numbers were thrown into the ring: 51%, 67%, 70%, 75%, 80%. Interestingly, no one suggested 100%, so quite clearly there are many pupils out there tagged as ‘secure’ despite having considerable gaps in their learning, gaps that may well widen over time. If we assume that these pupils are on track to meet expected standards then we may well be in for a shock.
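To make the arithmetic concrete, here is a minimal sketch – in Python, with every name, number and threshold invented for illustration, not taken from any real system – of the sort of calculation these systems perform:

```python
# A sketch of the pro-rata 'on track' logic described above.
# Function names and the 67% threshold are invented for illustration.

def on_track(objectives_met: int, total_objectives: int,
             term: int, terms_per_year: int = 3) -> bool:
    """'On track' if the pupil has met at least term/terms_per_year of the
    year's objectives: a third by term 1, two thirds by term 2, and so on."""
    expected = term / terms_per_year
    return objectives_met / total_objectives >= expected

def secure_at_year_end(objectives_met: int, total_objectives: int,
                       threshold: float = 0.67) -> bool:
    """'Secure' if the pupil clears the school's chosen year-end percentage,
    whatever that happens to be (51%? 67%? 75%?)."""
    return objectives_met / total_objectives >= threshold

# 25 of 36 objectives met counts as 'secure' at a 67% threshold,
# despite 11 unmet objectives (around 31% of the curriculum).
print(secure_at_year_end(25, 36))  # True (25/36 is roughly 69%)
```

Note what the pass/fail flag hides: two pupils can both be ‘secure’ with entirely different gaps in their learning.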

Next, those key accountability thresholds creep in: 65% for the floor standards, 85% for coasting. So, based on our definition of ‘secure’ or ‘on track’, which is quite possibly wide of the mark, we attempt to estimate the number of pupils that will meet the expected standard to satisfy ourselves (and our governors) that we’ll be safe this year. Breathe a sigh of relief. All this inferred from a somewhat spurious definition of ‘secure’ that varies from school to school. Worse still are those predictions based on a linear extrapolation of a pupil’s current progress gradient. I remember tracking systems doing this with point scores and the predictions were off the map. A pupil has made 4 points/steps/bands this year, so we assume they will do the same over the next two years and will therefore easily exceed the expected standard (seriously, this is going on).
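For the avoidance of doubt, the extrapolation amounts to nothing more than this – a hypothetical sketch with made-up numbers:

```python
# A sketch of the naive linear extrapolation described above: take this
# year's gain in points/steps/bands and project it forward unchanged.

def extrapolate(current_score: float, gain_this_year: float,
                years_ahead: int) -> float:
    """Assumes progress is linear: the same gain every year."""
    return current_score + gain_this_year * years_ahead

# A pupil on 14 'steps' who gained 4 this year is projected to reach 22
# two years later - regardless of curriculum demand, regression to the
# mean, or how a 'step' was defined in the first place.
print(extrapolate(14, 4, years_ahead=2))  # 22
```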

Next, we fall deeper down the rabbit hole and find schools that are converting teacher assessment into a sort of pseudo-scaled score. So, a pupil that is currently ‘secure’ or ‘at ARE’ will have a score of 100, whilst those pupils that are ‘above’ have higher scores, and those that are ‘below’ have lower scores. This is achieved by scoring and weighting each objective, totalling each pupil’s score and standardising it. Horrible. Don’t do this.
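For illustration only – the exact recipe varies from system to system – the conversion tends to boil down to something like the following, here standardising a cohort’s weighted totals to a mean of 100:

```python
# An illustrative sketch of the pseudo-scaled-score conversion: weight each
# objective, total each pupil's score, then rescale so the result looks like
# a scaled score. Every number here is invented.

from statistics import mean, stdev

def pseudo_scaled_scores(weighted_totals: list[float]) -> list[float]:
    """Rescale weighted raw totals to mean 100, sd 15 - mimicking a scaled
    score without the standard-setting that makes real ones meaningful."""
    mu, sigma = mean(weighted_totals), stdev(weighted_totals)
    return [round(100 + 15 * (x - mu) / sigma, 1) for x in weighted_totals]

# Weighted objective totals for five pupils:
print(pseudo_scaled_scores([42, 55, 61, 48, 50]))
# [80.8, 107.9, 120.4, 93.3, 97.5] - numbers that look authoritative but
# rest entirely on whichever weightings the system happened to choose
```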

My overall concern is the impact these practices have on the expectations and aspirations for pupils. Will there be a concentration of resources on ‘borderline’ pupils and perhaps less opportunity for pupils to deepen their learning? Can a school really promote a ‘learning without limits’ culture if it is distracted by distant thresholds? Will such approaches create a false sense of security that could easily backfire?

And what are the consequences if those predictions are wrong, as they are so likely to be?

Obviously schools will want to have some idea of likely outcomes, and no doubt governors (and others) will request such information, but really this should only be done for end of key stage cohorts, and any predictions should be informed by the standards, test frameworks and optional testing. It is extremely risky to attempt the leap from a broad teacher assessment – at the end of year 4, say – to an end of key stage outcome, especially now when the curriculum is so new. Essentially we are attempting to link observations to outcomes on the basis of a huge amount of supposition.

My firm belief is that tracking systems need to be untangled from accountability and performance management if they are to be truly fit for purpose. They should not be used to set performance targets and they should not be used for making predictions. If they are used in this way then there is always the risk that the data will be manipulated to provide the rose-tinted view rather than the warts-and-all picture that we really need. Instead, tracking systems should be very simple tools for recording and monitoring pupils’ achievement of key objectives – tools that allow teachers to quickly identify gaps in pupils’ learning and respond accordingly.

And if they do that then the final outcomes will take care of themselves.