As increasing numbers of schools question the effectiveness of their incumbent systems and begin to look at alternatives that may better meet the needs of their curriculum and approach to assessment, it’s worth bearing in mind a few golden rules. Hopefully these will act as a useful fitness-for-purpose test and save schools from repeating the mistakes of the past.
1) Separate teacher assessment from performance management
This is not so much an issue with the systems themselves as a mistake that many schools made: using tracking data to set performance management targets. These would commonly take the form of an annual rate of progress that all pupils were expected to make, or a particular threshold that a certain percentage were expected to be above. The problem, of course, is that the reliability of the data is rapidly compromised because teachers are under pressure to ensure the numbers go up every term. Consequently, teachers tick boxes to make outcomes match the targets rather than reality, and the data ceases to be a true reflection of a pupil’s learning. It is somewhat ironic that when tracking data is used for performance management it becomes a potential risk to pupils’ progress, presenting an inaccurate picture and thus exacerbating the very problem it aimed to solve. In reality, learning has its peaks and troughs and it is imperative that tracking data reflects this fact. We need systems that help teachers, not hammer them; and we’ll only get accurate data if teachers assess without fear.
2) Ensure your system is a tool for teaching and learning rather than accountability
This links to the point above, but it is more about external scrutiny than teacher performance management. Often our starting point is: what do I need for Ofsted? But we really need to put Ofsted to one side when designing an assessment approach or deciding on a tracking system. In fact, if we want to get this right, we need to pretend that Ofsted does not exist. Ofsted have no preferred approach anyway, and have stated in their handbook that they will work with whatever data the school uses to track the progress of its pupils. The same goes for LAs, RSCs, Governors and consultants: we must not choose a system on the basis that it satisfies their data cravings. Any data you produce should be a byproduct of your assessment system, not an exercise in itself. There is no value in producing data that tells you nothing about pupils’ learning, so always ask the following questions: who is it for? Why do they need it? What does it tell you? Remember: a system that is built to meet the needs of teachers in the classroom will best serve your pupils and, ultimately, your school.
3) Reduce the complexity
Some systems are big. Really big. They require a great deal of time and effort to learn, and regular training to keep on top of updates. They can also be expensive. These systems seem heavy and bloated, and every month more and more features seem to be bolted on. If you don’t keep up to date you quickly become lost amongst the hundreds of tables, graphs, charts and reports. Other systems are smaller, lighter, more nimble. They offer just what is needed to do the job and nothing more. They are a simple tool: functional and easy to use.

Good apps make the complex easy. Do we apply the same standards to our tracking systems as we do to our online shopping experiences? Would you buy from Amazon, or eBay, or Ocado, or John Lewis if their apps behaved like your tracking system? Do tracking systems really need to be this complicated?

Complexity wastes time. We spend ages navigating menus and report options, and all that functionality is distracting. Because it’s there we’ll no doubt try to use it; but if we haven’t got it we can’t use it, which frees up our time to do more important stuff. No headteacher should be wasting their evenings, weekends and holidays on a tracking system, printing out reports that no one should ever need to look at and that tell us little or nothing about learning.

Just imagine if we were able to triage our systems and divide the functions into three piles: stuff we need, stuff that might be useful, and bits we are never going to use. The first pile is probably going to be very small: a tracking grid containing pupils’ names alongside the curriculum objectives, and a handful of useful reports. That is the only pile we need.
4) Do not recreate levels
Goes without saying. Take a look at those groups or bands that pupils are assigned to and ask yourself this: are they best-fit categories based on broad coverage of curriculum objectives? The ‘emerging, developing, secure’ bands commonly used in many systems often simply reflect the percentage of objectives the pupil has achieved, rather than how securely they are working within the curriculum. In other words, they are more about coverage than depth and security of understanding. Consequently, most pupils are emerging in the autumn term, developing in the spring term, and secure after Easter. Then a point score is applied to each band, a linear point scale is established, and an expected rate of progress is prescribed for all pupils. Is this really any different to the old expectation of 3b (21), 3b+ (22), 3a (23), 3a+ (24) across year 5, for example? Instead of recreating levels, how about this: if a pupil gets what has been taught to date – is keeping pace with the demands of the curriculum – they are ‘secure’, regardless of the time of year. Simple really. A rough sketch of the difference is shown below.
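To make that distinction concrete, here is a minimal sketch in Python, using hypothetical numbers of objectives (30 in the year, 12 taught by autumn half term) purely for illustration. It contrasts a coverage-based band with a ‘keeping pace with what has been taught’ judgement:

```python
# Hypothetical illustration: suppose 12 of the year's 30 curriculum objectives
# have been taught by autumn half term, and a pupil has achieved 11 of them.
objectives_in_year = 30
objectives_taught_to_date = 12
objectives_achieved = 11

# Coverage-based band (the levels-like approach): judge against ALL of the year's
# objectives, so almost every pupil looks "emerging" in the autumn term.
coverage = objectives_achieved / objectives_in_year             # ~37% of the year

# Keeping-pace judgement: judge against what has actually been taught so far.
# A pupil who has got what has been taught to date is "secure", whatever the date.
keeping_pace = objectives_achieved / objectives_taught_to_date  # ~92% of taught content

print(f"Coverage of the whole year's objectives: {coverage:.0%}")
print(f"Achievement of objectives taught to date: {keeping_pace:.0%}")
```

On coverage alone this pupil looks like they are ‘emerging’; judged against what has actually been taught, they are clearly keeping pace with the curriculum.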
5) Stop obsessing about quantifying progress
This is the hardest thing to let go. “We have to measure progress, right?” Wrong! We have to show progress, and numbers do not show progress; they show that someone has ticked a few boxes in a tracking system and that the numbers went up because we made sure they did. We must stop making the progress measure the hub on which the wheel is built. Our systems should start with the curriculum, and they should be built simply to track and identify the gaps in pupils’ learning. We must resist the temptation to shoehorn the curriculum into a predefined series of point scores just so we can give a numerical approximation of progress that no one should be asking for and that tells us nothing of value. Yet that is what continues to happen: the overwhelming desire for a neat, linear progress measure – to distil learning down to a simple number – overrides the need for accuracy, usefulness and meaning. The units named after scientists such as Pascal, Joule and Newton were established after years of observation, testing and measuring. Our progress measures, on the other hand, are created in advance of any data: the scale is the priority and everything else is made to fit. Progress is catching up, filling gaps, deepening understanding, and overcoming barriers. As much as we’d like it to, can all this really be accurately represented by a single, simple, linear point scale? So don’t be distracted: track the gaps in pupils’ learning and the progress will take care of itself.
And in case this is all too scary…
There is plenty of justification for changing and rationalising your assessment system: to make it fit for purpose, to ensure that it meets the needs of teachers and pupils, and to stop it being a time-consuming burden. The Commission on Assessment without Levels, the Workload Review Group and Ofsted themselves all provide plenty of ammunition to defend your position, and I recommend you read their reports and the mythbusting section of the Ofsted handbook. Hopefully we can move towards a future where we simply assess for learning and our tracking systems exist solely for that purpose. No school should be under pressure to compromise its approach and produce spurious data in order to meet the demands of ‘the others’.
Now you just need to be brave and hold your nerve.