Those familiar with ‘Comment is Free’ on The Guardian website will know that nothing divides opinion and generates quite as much vitriol as an article about the latest smartphone. Not even the recent debate on Scottish independence could compete with the iPhone 6 in terms of acrimony. It’s a tribal thing, and I’ve noticed something similar in schools. You can go into a school and tell them their results are poor and it’s a fair cop. Criticise their tracking system, on the other hand, and people get seriously defensive. But criticise them I will, and more so this year, when many systems have paid lip service to assessment without levels by doing a bit of window dressing.
Over the course of this term I’ve spent much of my time discussing what various tracking systems – both commercial and those developed in house – are doing with regard to assessment without levels, and a certain track by The Who gets lodged in my brain on a loop.
“Meet the new boss, same as the old boss.”
In many schools the tracking system rules, to the point where I’ve heard senior leaders say “we like this method of assessment but it doesn’t fit with our system”, which is depressing and utterly the wrong way round. The tracking system is supposed to be a servant, not a master; a tool to inform teaching and learning, and to support self-evaluation. It should be tailored to fit the new curriculum, not the other way round, yet here we are attempting to shoehorn the new curriculum into systems that are deeply entrenched in the old 3-points-per-year methodology. These 3 points may now be called steps, and may be rebadged as ‘emerging’, ‘developing’ and ‘secure’ (these seem the most common terms), but let’s not kid ourselves: they’re just levels by another name, with some systems even attempting a conversion back to old money. A case of the Emperor’s new tracking system.
I think many of us assumed that, despite the new curriculum and the death of levels, progress would continue to be measured in much the same way, with extension into the next year’s curriculum being the order of the day. So, pupils could be categorised as working above (or way above) age-related expectations and progress shown in terms of years/months, or in points, much as we have done previously, with 3 points/steps being the expectation. An ‘exceeding’ child would be one working above their curriculum year, and good progress would be 4 or more steps a year.
Well, that’s what we thought. But then the penny dropped: it wasn’t about extension, it was about depth of understanding. All that mastery business.
So we have systems that were built to show rapid progress towards a goal of as many pupils as possible being above age-related expectations, now trying to measure achievement in a curriculum that expects all (or nearly all) pupils to cover broadly the same content at approximately the same rate; it’s just their depth of understanding that will differ. As a headteacher said to me recently: “coverage is irrelevant”. I’m still not sure how true that is, but it’s a cool soundbite and would look neat on a t-shirt.
And so, as this big, weighty penny hits the ground with a loud clang, the advice has changed. The original answer to the question of how we show progress – i.e. “just classify them as a year above” – has changed to “don’t classify them as a year above”. Pupils working in the next year’s curriculum become the exception rather than the rule. I note that this shift in thinking has resulted in the quiet dropping of the term ‘exceeding’ from the tracking and assessment lexicon, as people realise that ‘exceeding’ is a difficult thing to define when pupils are no longer progressing into the next year’s curriculum and beyond, but are instead drilling deeper.
What this means for many schools, as they carry out their autumn assessments and enter them into their tracking systems, is that pretty much all pupils are being categorised as ‘emerging’ in terms of that year’s objectives. Next term they’ll be ‘developing’, and by the summer they’ll all be ‘secure’. Hurrah! But a tracking system that doesn’t adequately differentiate between pupils is fairly pointless; and what’s missing from all this is depth of understanding. The terms ‘emerging’, ‘developing’ and ‘secure’ are generally being used to denote coverage of curriculum content, each linked to a third of objectives achieved (or a sixth if spread across 2 years). They do not indicate the degree to which the pupil has mastered the subject. That’s a different matter entirely, and one that is only just beginning to be addressed by tracking systems, most of which are still locked into a concept of progress based on rapid extension.
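To see why these bands fail to differentiate, here is a minimal sketch of the kind of rule many systems appear to use – coverage of objectives mapped onto thirds. The function name and thresholds are my illustration, not any particular system’s:

```python
# Hypothetical sketch: many tracking systems appear to derive the band
# from coverage of the year's objectives alone, with each band spanning
# roughly a third. Depth of understanding never enters the calculation.

def coverage_band(objectives_achieved: int, total_objectives: int) -> str:
    """Classify a pupil by the fraction of the year's objectives covered."""
    fraction = objectives_achieved / total_objectives
    if fraction < 1 / 3:
        return "emerging"
    elif fraction < 2 / 3:
        return "developing"
    else:
        return "secure"

# In autumn nearly every pupil has covered few objectives, so nearly
# every pupil lands in 'emerging' -- the band tracks the calendar,
# not mastery.
print(coverage_band(5, 30))   # autumn
print(coverage_band(12, 30))  # spring
print(coverage_band(25, 30))  # summer
```

Run on a whole cohort at the same point in the year, a rule like this returns the same label for almost everyone, which is the blog’s point: the band measures elapsed curriculum, not depth.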
Ironically, it is the lower-ability pupils that these systems serve well: racing to catch up with age-related expectations, they are able to make rapid rates of progress in the traditional sense of the word. Pupils who are where you expect them to be at the start of the year will probably all make expected progress. And best not have any higher attainers: they’ll most likely go backwards this year, and make expected progress after that.
Clearly there needs to be a radical rethink of our approach to the tracking of assessment data, one where depth of learning is central to our concept of progress rather than an add-on feature. But there are still lots of questions to answer and debates to have over the course of this year. Can we confidently define a level of mastery at any point in the year? Can we use an average depth of understanding to compare groups of pupils or subjects? Can we track progress through changes in the depth of understanding? Is that any more or less useful than APS? Can an SEN pupil working at a much lower ‘level’ still show mastery in their work? I hope so. However, until we let go of the comfort blanket of pseudo-levels we’re not going to solve these issues and come up with a fit-for-purpose system that works in harmony with the new curriculum rather than attempting to straitjacket it.
So, forget the old boss and do something different.
We won’t get fooled again.