Age-Related Expectations?

Paragraph 242 of The School Inspection Handbook states that ‘on inspections of infant, junior, primary and lower-middle schools, inspectors will carry out a deep dive to evaluate how well pupils are taught to read. They will pay particular attention to pupils who are reading below age-related expectations (the lowest 20%) to assess how well the school is teaching phonics and supporting all children to become confident, fluent readers. This will include understanding how reading is taught remotely, where applicable.’

This statement has caused a huge amount of confusion and perhaps more debate than any other part of the handbook. Who are the lowest 20%? Do they mean the lowest 20% of the school? Those pupils in each year group who fall into the lowest quintile when ranked by attainment? Or those in the school who fall into the lowest 20% nationally? The clue is in the phrase ‘below age-related expectations’. In some cohorts – or even some schools – there may be no, or very few, pupils below age-related expectations; in other schools it may be the majority. A recent conversation with an HMI provided some clarity: it’s actually a bit of both. The starting point is identifying those who didn’t meet expected standards in national assessments (year 1 phonics or reading at key stage 1); if that doesn’t amount to many (or any) pupils, attention then turns to the weaker readers, relatively speaking.

But this ambiguity is not helpful. First, the proportion of pupils that fall below age-related expectations in national assessments is nearly always greater than 20%. The following table shows the percentages of pupils not meeting goals or standards in the various reading-related statutory assessments of the primary phase.

Assessment           2019   2022
ELG Reading          23%    25%*
Y1 Phonics           18%    25%
KS1 EXS+ Reading     25%    33%
KS2 EXS+ Reading     27%    25%

*2022 ELG Word Reading (new framework) is not directly comparable with 2019 ELG Reading under the old EYFS framework.

As the data shows, in no case does ‘below age-related expectations’ align with the lowest 20%. The closest is the 2019 phonics outcome, which is probably the origin of the directive: historically, approximately 80% of pupils passed the phonics check in year 1 and 20% did not.

But there is another problem. In most cases, when we use the phrase ‘age-related expectations’, we are actually referring to curriculum-related or, more likely, assessment-related expectations. We are probably – as in the case above – talking about pupils who have or have not met the expected standard of a test or assessment framework. It has nothing to do with a pupil’s age beyond the academic year they are in. Only in Early Years is it standard practice for a pupil’s age – i.e. in months – to be taken into account when making assessments. It is therefore possible for two pupils at different stages of development to both be described as ‘at age-related expectations’, especially if they were born at opposite ends of the school year. A one year age gap at this stage is vast.

There is, however, a potential bump in the road: a summer born pupil assessed at age-related expectations – at a typical level of development for their age – may not meet the early learning goals at the end of the foundation stage. This is because the Early Years Foundation Stage Profile, like all other forms of statutory assessment, takes no account of a pupil’s age in year. And this gets to the heart of the problem: summer born pupils are disproportionately represented in the ‘below age-related expectations’ group. The following table shows the percentages of September and August born pupils who fall short of expected standards in national assessments:

Assessment           September born   August born
ELG Word Reading     18%*             33%*
Y1 Phonics           17%              33%
KS1 EXS+ Reading     24%              43%
KS2 EXS+ Reading     20%              31%

*Month of birth data unavailable, so term of birth used instead.

Therefore, when inspectors – and others involved in monitoring school standards – focus on pupils that are supposedly ‘below age-related expectations (the lowest 20%)’, there is a risk that they end up with a group of predominantly summer born pupils. And this means that summer born pupils have a greater chance of being identified as having special educational needs.

Yet, as illustrated by this excellent Datalab blog post, we know that summer born pupils make accelerated progress, that they catch up with their autumn born peers, and that the gaps between the two groups narrow at each key stage. So should the results of national assessments be adjusted to take account of age? Some assessments – Cognitive Ability Tests (CATs), for example – already do this. In contrast to ‘normal’ norm-referenced standardised scores, which show how pupils compare to other pupils in the same year group, age standardised scores show how pupils compare to other pupils born at the same point in the year. Using age standardised scores, any instances of low attainment are therefore less likely to be ‘age-related’ and more likely to result from other factors. And because norm-referenced scores correspond to percentile ranks, we could – if we really needed to – apply thresholds to flag pupils whose attainment might be a cause for concern (a short sketch of the mechanics follows the list):

  • <100: below average
  • <91: lowest 25%
  • <88: lowest 20% (Ofsted’s focus)
  • <81: lowest 10%
  • <70: lowest 2%

NB: Obviously you can use higher scores to identify pupils whose attainment places them in, say, the top 20% (>112).
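To make the mechanics concrete, here is a minimal sketch in Python of how age standardisation and the thresholds above could be applied. It is illustrative only: the month-of-birth grouping, the mean-100/SD-15 scale and the function names are assumptions for the purposes of the example, not a description of how CATs or any other test provider actually compute their scores.

```python
from collections import defaultdict
from statistics import mean, stdev

# Thresholds from the list above (upper cutoff -> band label).
THRESHOLDS = [
    (70, "lowest 2%"),
    (81, "lowest 10%"),
    (88, "lowest 20% (Ofsted's focus)"),
    (91, "lowest 25%"),
    (100, "below average"),
]

def age_standardise(pupils):
    """Rescale raw scores to mean 100, SD 15 within each
    month-of-birth group, so an August born pupil is compared
    with other August born pupils rather than the whole cohort.
    `pupils` is a list of (name, birth_month, raw_score) tuples;
    grouping by single month is an illustrative assumption."""
    by_month = defaultdict(list)
    for _, month, raw in pupils:
        by_month[month].append(raw)
    stats = {m: (mean(s), stdev(s) if len(s) > 1 else 0.0)
             for m, s in by_month.items()}
    scores = {}
    for name, month, raw in pupils:
        mu, sd = stats[month]
        z = (raw - mu) / sd if sd else 0.0
        scores[name] = round(100 + 15 * z)
    return scores

def flag(score):
    """Return the band an age standardised score falls into."""
    for cutoff, label in THRESHOLDS:
        if score < cutoff:
            return label
    return "at or above average"

# e.g. flag(86) -> "lowest 20% (Ofsted's focus)"
```

Because the scores are standardised within month-of-birth groups, a summer born pupil is only flagged when their attainment is low relative to pupils of the same age, which is precisely the adjustment the preceding paragraph argues for.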

Schools that use standardised tests can already do this, but Ofsted inspectors do not take the results of internal assessments into account, and the current system of statutory assessment in primary schools cannot provide the information they require. It is not designed to show where pupils sit within the national population; it is designed to show whether or not pupils have met a particular standard. And the elephant in the room is the system’s reliance on teacher assessment, which is subjective, prone to bias and at risk of distortion when the stakes are high. In terms of increasing efficiency, reducing workload and improving the accuracy of primary assessment data, the proposal set out in this EDSK report is compelling: use quick-to-administer, online, adaptive standardised tests every two years. Such an approach could provide more reliable data and consequently better measures of attainment and progress. It would also give schools a good understanding of how their pupils compare to other pupils nationally over time.
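For a flavour of what ‘adaptive’ means in practice: each question is selected to match the pupil’s current estimated ability, so the test homes in on a reliable estimate with far fewer items than a fixed paper. The sketch below uses a simple Rasch-style model with a shrinking-step update; the item bank, update rule and test length are illustrative assumptions, not anything specified in the EDSK report.

```python
import math
import random

def p_correct(ability, difficulty):
    """Rasch model: probability of answering one item correctly."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

def adaptive_test(answer, item_bank, n_items=10):
    """Administer a short adaptive test.
    `answer(difficulty)` asks one question and returns True/False;
    `item_bank` is a list of item difficulties (in logits).
    Returns the final ability estimate (in logits)."""
    ability = 0.0
    remaining = list(item_bank)
    for step in range(1, n_items + 1):
        # Ask the unused question closest to the current estimate.
        item = min(remaining, key=lambda d: abs(d - ability))
        remaining.remove(item)
        # Shrinking-step update: nudge the estimate up if correct,
        # down if not, by smaller and smaller amounts.
        ability += (1.0 if answer(item) else -1.0) / step
    return ability

# Simulate a pupil with true ability 0.5 answering probabilistically.
bank = [i / 4 for i in range(-8, 9)]  # difficulties from -2 to 2
pupil = lambda d: random.random() < p_correct(0.5, d)
print(round(adaptive_test(pupil, bank), 2))
```

The final estimate sits on a continuous scale rather than a pass/fail boundary, which is what would allow it to be converted into the norm-referenced, age standardised scores discussed above.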

There is certainly a strong argument for changing primary assessment to take account of age, to lessen the risk of singling out summer born pupils as the low achievers. Assessments should be fewer in number, standardised, comparable with one another, and should generate norm-referenced, age standardised scores. Even then, the phrase ‘below age-related expectations’ would be a misnomer; ‘pupils with low attainment for their age’ would be more appropriate. This is not about redesigning the assessment system for Ofsted; it is about creating a more efficient and effective approach that would provide accurate, timely data capable of ironing out the creases caused by differences in age, and allow attainment to be tracked over time. Yes, it would allow inspectors – and teachers – to identify those in the lowest 20% nationally – for their age! – but it would also have an interesting side-effect: a move to age standardisation would signal the end of expected standards as we know them.

Would that be so bad?
