Predicted grades: Between Scylla and Charybdis

It is predicted grades time and I am agonising more than ever.  It has always been a task that has filled me with dread, but this year it is at best overwhelming.  The new specifications and the unknowns about grade boundaries have sent me into a spin.

Progress and prediction data have always been a tricky thing.  We all attempt to feed the data machine with professional integrity, although the complex relationships that surround teaching and learning are often not best described by a drop-down box.  Progress may be non-linear and spiky for some, and we should worry less about turning a spreadsheet from red to green.  Predicted grades are also awash with practical pitfalls: for example, if you share classes and units, or when attainment is divergent on different topics and papers.  However, if we do not give useful data then tutors, heads of department, heads of year and senior leaders are not able to target support where needed.  A tricky task, but one where I felt I knew where I stood and what was expected of me.

A new data box has been included this year, one where we have to select from a drop-down menu and choose: on target, above target or below target.  This has got me in a real pickle.  What should, on the face of it, be a simple question has got me thinking about how I track progress and predict grades for my students.  How reliable and valid are my efforts?

On the one hand, it represents an attempt to streamline the data demands and have a clearer narrative about student progress.  In a world without levels, I guess we have to pitch our tent somewhere.  The instructions seem quite clear: the school is asking whether, in my experience as the classroom teacher, I think the students are on target, above target or below target.  This is a type of data task I have been carrying out for over 16 years and it has not really bothered me in the past.  But now, I am left wondering.

The problem is … I may have undermined my own belief in the data.  In the old days, I felt it was so simple.  I could differentiate between those who were doing well and those who were struggling.  Send some postcards home for those exceeding expectations and then arrange some intervention for those below expectation, a sort of educational triage if you will.  So far, so familiar.  What on earth am I moaning about?

I guess I am wondering how accurate I have been in the past and whether my belief in the data was doing more harm than good.  The problem with teaching subjects as a list of topics and themes is that progress can really only be judged over time.  Just because someone writes a good essay on Marxist approaches to crime, it does not necessarily mean they will translate this ‘skill’ into some of the more difficult aspects of post-structuralism (tbh, I am not sure I could write a good essay on post-structuralism).

Well, you see, a further complication is that we are all teaching new specifications this year and new schemes of learning.  Without any experience of the marking rubric or enough data on standardisation, it is very difficult to know where the boundaries might lie and therefore almost impossible to offer a valid prediction.

Well, what can I do?

I can grade work on the last topic, give an estimate of the mark band and translate that into potential grades.  I can qualitatively identify the skills they are using and which assessment objectives they can demonstrate in their essays.  I can comment on the softer skills: the effort put in, classroom behaviour, home learning, engagement and so on.  But on the grade … the quantifiable, statistically reliable, objective assessment grade … I am going to have to use a mixture of evidence and best guess.

What I am now anxious about is using my intuition to guess what this ‘working at grade’ will mean in the summer exam series.  The problem with using your gut instinct is that it opens the possibility of a range of biases and pre-judgements.

There is also the problem of whom this data is for.  A low prediction may crush a student and cause a self-fulfilling prophecy.  Meanwhile, a more positive reading of the runes, a ‘best case’ scenario, may mean my predictions are too generous, which distorts the school’s overview of progress.  So as I mount the tightrope of predicted grades in a world of known unknowns, I do so with a knot in my stomach, wondering if I can avoid the jaws of accountability and outswim the whirlpool of student despair.