Using standardised scores in progress matrices

Schools are always looking for ways to measure and present progress. Most primary schools have tracking systems that offer some sort of progress measure, but these are almost always based on teacher assessment and involve a level substitute: a best-fit band linked to coverage of the curriculum with a point score attached. Increasingly, schools are looking beyond these methods in search of something more robust, and this has led them to standardised tests.
One of the benefits of standardised tests is that they are – as the name suggests – standardised, so schools can be confident that they are comparing the performance of their pupils against a large national sample. Another benefit is that schools become less reliant on teacher assessment for monitoring standards – one of the key points made in the final report of the Commission on Assessment without Levels was that teacher assessment is easily distorted when it is used for multiple purposes (i.e. accountability as well as learning). Standardised tests can also help inform teacher assessment, so we can have more confidence when we describe a pupil as ‘meeting expectations’ or ‘on track’.
And finally, standardised tests can provide a more reliable measure of progress across a year, key stage or longer. However, schools often struggle to present the data in a useful and meaningful way. Scatter plots – plotting previous test scores against latest – are useful because they enable us to identify outliers. A line graph could also be used to plot change in average score over time, or show the change in gap between key groups such as pupil premium and others.
But here I want to concentrate on the humble progress matrix, which plots pupil names into cells on a grid based on certain criteria. These are easily understood by all, enable us to spot pupils that are making good progress and those that are falling behind, and they do not fall into the trap of trying to quantify the distance travelled. They can also help validate teacher assessment and compare outcomes in one subject against another. In fact, referring to them as progress matrices is doing them a disservice because they are far more versatile than that.
But before we can transfer our data into a matrix, we first need to group pupils together on the basis of their standardised scores. Commonly we see pupils defined as below, average and above using the 85 and 115 thresholds (i.e. one standard deviation either side of the mean), but this does not provide a great deal of refinement and means that the average band contains around 68% of pupils nationally. It therefore makes sense to subdivide the data further, and I think the following thresholds are useful:
<70: well below average
70-84: below average
85-94: low average
95-105: average
106-115: high average
116-130: above average
>130: well above average
Or if you want something that resembles current assessment (controversial!):
<70: well below
70-84: below
85-94: working towards
95-115: expected
116-130: above
>130: well above 
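
To make the banding concrete, here is a minimal sketch in Python using pandas. The pupil names and scores are invented purely for illustration, and the bin edges follow the seven-band version above; a tracking system would obviously work from its own data.

```python
import pandas as pd

# Invented pupils and scores, purely for illustration.
pupils = pd.DataFrame({
    "name": ["Aisha", "Ben", "Cara", "Dev"],
    "score": [68, 92, 104, 126],
})

# Bin edges follow the seven-band thresholds above. pd.cut includes the
# upper edge of each bin, so 85-94 falls in 'low average', 95-105 in
# 'average', and so on.
pupils["band"] = pd.cut(
    pupils["score"],
    bins=[0, 69, 84, 94, 105, 115, 130, 200],
    labels=["well below average", "below average", "low average", "average",
            "high average", "above average", "well above average"],
)

print(pupils)
```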
By banding pupils using the above thresholds, we can then use the data in the following ways:
1) To show progress
Plot pupils’ current category (see above) against the category they were in previously. The starting point could be a previous standardised test taken, say, at the end of last year; the key stage 1 result; or an on-entry teacher assessment. Pupils’ names will plot into cells and it is easy to spot anomalies.
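
As a rough sketch of how such a matrix might be assembled, the snippet below (continuing the pandas example above) crosses a previous band against the current one and lists pupil names in each cell. The 'prev_band' column and the band_matrix helper are invented for illustration, not taken from any particular tracking system.

```python
# Invented previous bands (e.g. from last year's test) for the same pupils.
pupils["prev_band"] = ["below average", "low average", "low average", "above average"]

def band_matrix(df, row_col, col_col, name_col="name"):
    """Cross one banding column against another, listing pupil names in each cell."""
    return (
        df.groupby([row_col, col_col], observed=True)[name_col]
          .apply(", ".join)
          .unstack(col_col)
          .fillna("")
    )

# Rows: previous band; columns: current band. Names well off the diagonal
# are the anomalies worth a closer look.
print(band_matrix(pupils, row_col="prev_band", col_col="band"))
```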
2) To compare subjects
As above, but here we plot pupils’ category (again, based on the thresholds described above) in one subject against their category in another. We can then quickly spot pupils who are high-attaining in one subject and low in the other.
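
The same hypothetical helper can cross one subject against another, assuming the banding step has been repeated for each subject to give columns such as 'reading_band' and 'maths_band' (again, invented names).

```python
# Invented per-subject bands, produced by banding each subject's scores.
pupils["reading_band"] = ["below average", "average", "high average", "well above average"]
pupils["maths_band"] = ["below average", "low average", "average", "above average"]

# Rows: reading band; columns: maths band. Pupils far from the diagonal
# are high in one subject and low in the other.
print(band_matrix(pupils, row_col="reading_band", col_col="maths_band"))
```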
3) To validate and inform teacher assessment
By plotting pupils’ score category against the latest teacher assessment in the same subject, we can spot anomalies – those cases where pupils are low in one assessment but high in the other. Often there are good reasons for these anomalies but if it’s happening en masse – i.e. pupils are assessed low by the teachers but have high test scores – then this may suggest teachers are being too harsh in their assessments. It is worth noting that this only really works if schools are using what is becoming known as a ‘point in time’ assessment, where the teacher’s assessment reflects the pupil’s security in what has been taught so far rather than how much of the year’s content they’ve covered and secured. In a point in time assessment, pupils may be ‘secure’ or ‘above’ at any point during the year, not just in the summer term.
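
And the same again for teacher assessment: cross the test band against the latest point-in-time judgement in the same subject. The 'teacher_assessment' column below is, once more, an invented example.

```python
# Invented point-in-time teacher assessments for the same pupils.
pupils["teacher_assessment"] = ["working towards", "working towards", "expected", "expected"]

# Rows: test band; columns: teacher assessment. A large cluster of high
# test bands against low teacher assessments would suggest the teacher
# assessments are too harsh.
print(band_matrix(pupils, row_col="band", col_col="teacher_assessment"))
```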
But what will Ofsted think?

The myth-busting section of the Ofsted handbook has this to say about tracking pupil progress:
“Ofsted does not expect performance and pupil-tracking information to be presented in a particular format. Such information should be provided to inspectors in the format that the school would ordinarily use to monitor the progress of pupils in that school.”
Matrices provide a neat and simple solution: they are easily understood by all, and they allow us to effectively monitor pupil progress without resorting to measuring it.

Definitely worth considering. 