To be honest, I have always struggled with predicted grades. It is the classic battle around student data between formative and summative assessment: teachers want useful formative data to help diagnose a student's strengths and weaknesses, whilst schools, senior leaders and parents all want to know what grade a student is likely to get at the end of the course. The answer is at best problematic in terms of the reliability and validity of the judgement, alongside the more complex issues of student motivation, labelling and our own unconscious bias. Is it helpful or a hindrance to be labelled this close to the real thing? Will this grade motivate or decimate this student and the progress they are making in the subject? The process of predicting a grade is far from a science and is influenced by many factors. For example, how easy is it to give an accurate prediction for a class you share? If they are a C with you and an A with your colleague, does a B do their learning justice?
At this point in the year, we have still not finished teaching the course content or honing the written skills needed in the upcoming exams. It is amazing what happens in the last few weeks as we gear up for the real thing. Whilst we have a range of formative and summative assessments to reflect on, what do we really know?
Daisy Christodoulou uses a useful analogy of marathon running in her recent book Making Good Progress. Marathon runners are judged by their final time in minutes and seconds, but their coaches carry out a wide range of activities to help them perform better: running some marathons and half marathons, yes, but equally lots of activities that focus on the underlying skills. For us in the classroom it might include vocabulary testing, multiple choice, question-decoding and so on. When all these different measures are considered, it may be possible to tease out some themes and patterns about final performance, but what happens in training is not always a good proxy. Christodoulou says the triangulation of all this data may make it more likely that the runner will go faster, but it will not guarantee it – it is probabilistic, not deterministic. Furthermore, doing lots of marathons before the big race may actually impede learning, as we over-focus on summative tasks and not enough on improving the nuts and bolts of good subject knowledge, extended writing and exam technique. Predicting grades is a complex process.
This annual charade has been made even more ridiculous by the omnishambles of curriculum reform as we move to linear exams in A-levels and a new grading system in some GCSEs. There is a much higher degree of uncertainty this summer: we do not really know what the grade profile will look like, and whilst the national trends will be similar, there is likely to be much more volatility at school and department level. I know I speak for all teachers when I say thank heavens for more uncertainty in an already uncertain world. Well, at least our pay and career progression is not linked to it … oh hang on.
It seems as though Ofsted have also come to the view that predicted data may not be valid, reliable or useful as we enter this period of exam change.
Sean Harford (March 2017)
In short, it’s a mug’s game at times of change in qualifications, and should be avoided. That’s why I have written to all our inspectors in the March 2017 ‘School inspection update’ to ask that they do not request predictions for cohorts about to take examinations.
This is great news, although I suspect school leaders, parents, governors and students will still expect it. However, he goes on to make the sensible suggestion that, as we go through this period of curriculum change, inspectors will not put too much reliance on test and exam data.
Good heavens. To be honest, it is Ofsted who have in part created this data behemoth and accountability culture. I am not suggesting that we should not be accountable for what we do, but it is good to see them begin to remove some of the beast's horns, which might allow for a real discussion about learning and progress.