Anyone who’s read my recent tweets or blogs, or been to one of my ranting training sessions, will know that the concept of linear progress – that pupils should all progress at the same rate regardless of start point – has become a bit of an obsession. As I’ve blogged about previously, it concerns me that so many established tracking systems are maintaining a continuous point scale (points now called steps) in order to track progress. Deep down we all know this is not how children learn – it’s one of the main reasons why getting rid of levels is a good thing – but it makes tracking easier if we distill progress down to a simple number: an expected rate of progress that applies to all. So, we define a unit of progress and expect all pupils to make three of them per year. Why three? Well, because there are three terms per year and that’s what our systems have always done so redevelopment is minimised. Meet the new boss, same as the old boss. We therefore continue with the deep-rooted, universal expectation of progress for all pupils because it makes tracking easy. The system dictates the measure.
Seriously, if systems dictated everything then we’d probably shop in alphabetical order.
4 steps good, 3 steps bad (or average, or expected, or not good enough) has become doctrine and lives on in a new guise. Hardly assessment without levels. Just look at your system and ask yourself this: how does it define better than expected progress? Does a pupil need to tip into the next year’s curriculum in order to gain that all-important 4th point? If so, then you should be concerned. Unless we seek out ‘real’ alternative approaches to tracking progress we are destined to continue down the same path, focussing on pace at the expense of depth of understanding, and repeating the mistakes of the past.
On two separate occasions recently – once on Twitter and once during one of my sessions – I’ve been asked what the alternative is to the linear approach to progress measures. On both occasions I’ve suggested simply comparing the percentage of objectives in a particular subject that are deemed secure at, say, the start and end of the term. And on both these occasions the response has been the same: “but that is linear progress!”.
No it’s not.
It only becomes linear if we assume and apply a common expected rate to the data and make a judgement about the pupil’s progress by comparing the percentage change against an arbitrary threshold. For example, we expect the percentage of ‘secure’ objectives to increase by 33 percentage points each term, which is a common rule in many established tracking systems. So, if a pupil moves from 40% to 70% secure, they have failed to make expected progress, but if they move from 40% to 75% they’ve done OK. And if one pupil progresses from 70% to 100% secure, they’ve made the same amount of progress as a pupil progressing from 40% to 70%. And perhaps if they progress by 40 percentage points or more across the term they’ve made better than expected progress, because that’s a nice, neat figure that we can all easily remember.
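To make the arbitrariness concrete, here is a minimal sketch of the kind of rule described above (Python; the function name and the 33/40-point thresholds are illustrative assumptions on my part, not any particular system’s code):

```python
def classify_progress(start_pct, end_pct, expected_gain=33, better_gain=40):
    """Classify progress purely by percentage-point gain in 'secure' objectives.

    start_pct / end_pct: percentage of objectives deemed secure at the start
    and end of the term. The thresholds are the sort of arbitrary, universal
    figures criticised above.
    """
    gain = end_pct - start_pct
    if gain >= better_gain:
        return "better than expected"
    if gain >= expected_gain:
        return "expected"
    return "below expected"

# The rule treats very different learning journeys as equivalent:
print(classify_progress(40, 70))   # 'below expected' (30 points)
print(classify_progress(40, 75))   # 'expected' (35 points)
print(classify_progress(70, 100))  # 'below expected' - same verdict as 40% -> 70%
```

The point is not that the calculation is wrong; it’s that any such rule pretends a 30-point gain means the same thing for every pupil, in every subject, from every start point.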
So, stating that a pupil has progressed from X% to Y% isn’t the problem, as I’ve not yet drawn a conclusion from those two observations. The problem is when we then seek to categorise the progress pupils make by applying a common rule to the data – a universal constant of learning. We simplify the data and neaten it up to make it easier to understand, so that we can make a quick and easy judgement about pupil performance. Ultimately we want a magic number, a standard unit of progress, that allows us to compare the progress of pupils of differing abilities, from different start points, even in different years and subjects, so that 3 steps in maths in year 2 supposedly has the same value as 3 steps in reading in year 6.
The truth is that linear progress is an easy concept to get our heads round, so we accept it despite knowing that it’s wrong. However, as is often the case with data, the more we simplify it, the less useful or meaningful it becomes. We’re just pigeonholing pupils for the sake of convenience. It doesn’t mean anything; it just makes our lives easier. Plus, it’s how the system works, so we have no choice, right?
So what are the alternatives to measuring progress in this way? Here are a few options:
Option 1: Do nothing
As I’ve discussed in my previous post, The Progress Myth, maybe progress isn’t something we can quantify or categorise; and maybe we shouldn’t attempt to. Instead, perhaps we should simply use our tracking systems to identify gaps in learning so that teachers can better support their pupils. A brave and radical move away from making judgements about pupil progress based on arbitrary thresholds, but maybe it’s what we should be doing. Maybe we should stop trying to quantify the unquantifiable.
Option 2: A teacher assessment of progress
Almost as brave as the above, but it makes sense when you think about it. If teachers can make an assessment of attainment, then why not progress? Why do we need to rely on a system to quantify and make a judgement on progress based on some calculation that we don’t agree with? Imagine if the teacher made an assessment of the progress pupils made that took into account start point, expectations, targets, attitude to learning, effort, learning difficulties, and other influencing factors. Is that really so radical?
Option 3: Establish progress pathways
If we accept that different pupils learn at different rates in different subjects from different start points, then we could attempt to establish progress pathways for a more meaningful approach to the tracking of pupils’ learning journeys. Essentially, individualised target setting whereby progress is checked against appropriate interim milestones set for the end of each term or year. Not easy to establish but more meaningful than a straight line.
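As a rough sketch of the idea (Python; the pupils, milestone figures and helper function are invented purely for illustration), a pathway is simply a per-pupil set of interim targets rather than a single universal increment:

```python
# A progress pathway: per-pupil interim milestones (here, % of objectives
# secure expected by the end of each term). Figures are invented; in practice
# they would be set per pupil, per subject, from the pupil's actual start point.
pathways = {
    "pupil_a": {"autumn": 30, "spring": 55, "summer": 85},  # steeper early gains
    "pupil_b": {"autumn": 20, "spring": 35, "summer": 60},  # slower, lower start
}

def on_track(pupil, term, actual_pct):
    """Compare a result against that pupil's own milestone, not a universal rate."""
    return actual_pct >= pathways[pupil][term]

print(on_track("pupil_a", "spring", 50))  # False: behind their own pathway
print(on_track("pupil_b", "spring", 50))  # True: ahead of theirs
```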
Option 4: Track towards an end of Key Stage VA estimate
This is attractive on two counts:
1) VA will be the only measure of progress from 2016 onwards and, considering the anxiety around the proposed attainment floor standards (85% meeting the expected standard at Key Stage 2), schools are going to want to pull apart and understand VA more than ever.
2) VA involves comparing a pupil’s attainment against the average outcome for pupils with the same prior attainment nationally, which means that it does not involve a universal expected rate of progress like the levels of progress measure. It is therefore fairer.
The problem with tracking towards such a distant target is that systems generally do an awful job of it. Common methodology involves generating an end of key stage prediction based either on an expected rate of progress or on an extrapolation of the pupil’s actual rate of progress to date. The former is meaningless (see above) and the latter is highly inaccurate: we can’t assume that pupils will continue to progress at the same rate, because progress is not, well, linear. Instead, to do this properly we’d need to establish appropriate interim milestones that don’t necessarily sit along a straight line. Obviously, VA estimates are going to be vitally important; it’s just that we haven’t really worked out how to track towards them yet.
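To put both points in concrete terms, here’s a hedged sketch (Python; the figures, the scaled-score-style numbers and the simple subtraction are illustrative assumptions, not the DfE’s actual VA methodology):

```python
def va_score(actual, estimate_for_prior_attainment_group):
    """VA-style measure: compare a pupil's result with the average outcome
    achieved nationally by pupils with the same prior attainment (point 2 above)."""
    return actual - estimate_for_prior_attainment_group

print(va_score(104, 101))  # +3: above the average for pupils with a similar start point

# The common (and flawed) tracking approach: extrapolate the rate of progress
# to date in a straight line to the end of the key stage.
def straight_line_prediction(start_pct, current_pct, terms_elapsed, terms_total):
    rate_per_term = (current_pct - start_pct) / terms_elapsed
    return start_pct + rate_per_term * terms_total

# A pupil who made rapid early gains looks certain to sail past any target...
print(straight_line_prediction(20, 50, terms_elapsed=3, terms_total=9))  # 110.0
# ...which is nonsense: the prediction says more about the shape of the first
# three terms than about where the pupil will be at the end of the key stage.
```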
(Right now, with so much ambiguity around assessment and tracking, I’m favouring option 2.)
And so…
We know that the concept of progress is changing. We know now that it is more about depth and less about pace, and that our systems are struggling to adjust. I’m seeing too many systems that are awash with red because pupils are apparently below age-related expectations and are making poor progress or have gone backwards. Some schools wearily accept this whilst others attempt to work around it, perhaps tinkering with the assessments to arrive at a more acceptable range of figures and colours. This can’t continue.
We don’t really have a clear idea how any of this will work, and that’s why we need to stop and think long and hard about how we track and monitor pupil progress. We certainly shouldn’t blindly accept the approach employed by the systems we use. Ask yourself this question: does your current system provide a fair, accurate and meaningful representation of the progress pupils make? If it does and you like your system, that’s great; but if it doesn’t, ask your provider about alternative approaches. If there is no alternative approach, then seriously consider changing the system. Do not compromise on your preferred approach.
And always remember the mantra: the system must be tailored to fit the way you assess, not the other way round.