Running an innovation

20 March 2018

Author: John Coats

Notre Dame High School, Sheffield, have been awarded three IEE Innovation Grants over the past year. John Coats (@JohnCoatsND), Director of School Improvement, writes about the lessons learned so far.

If you are interested in applying for innovation bids, find out more from the IEE

Last time, I wrote about the importance of taking time to stop and ask the question ‘Is what is happening in the classroom next door actually any better than what is happening in my own classroom?’

This is the starting point for any evaluation but, as we’ve realised from running three IEE Innovation Grant projects, answering it is not always as straightforward as you might think.

Take our first Innovation Grant where we started with a vague notion (informed by anecdotal evidence from a couple of classrooms at Notre Dame) that giving students recorded verbal feedback was ‘better’ than conventional written feedback.

I was genuinely surprised at how little research there was on the effectiveness of audio feedback, given the frankly enormous amount of time teachers invest in giving feedback. Talk to your local Research School or access the increasingly comprehensive EEF toolkit to identify what the existing research says about your hunches.

Next we had to knock our hunch into something more coherent. What did we mean by ‘better’? When we said ‘students’, which students were we talking about? Which Key Stage? Which subjects? We decided we wanted ‘better’ to mean ‘better exam outcomes’. We also wanted it to be ‘better’ for teacher workload. After all, if teachers found giving this kind of feedback less onerous, they would have more time to do everything else better or, importantly, manage a better work-life balance and be more likely to stay in the profession (again, ‘better’ for students in the long term). We decided on two subjects at KS5, based on our own anecdotal in-house experience.

We now had two very specific research questions:

  1. Is verbal feedback (using an audio tool) more effective than written feedback in improving test outcomes in A level Sociology and Mathematics?
  2. Does providing students with verbal feedback (using an audio tool) rather than written feedback have a positive impact on teacher workload?

We then needed to plan an experiment involving sufficient students, receiving a sufficient dosage of audio feedback, with a sufficiently well-designed pre- and post-test and sufficiently well-designed staff questionnaires to get data to answer these questions. Having planned three of these projects now, and having had each project plan undergo several iterations, I can confidently make two observations.

Firstly, there was far more to think about here than I initially thought. Numerous interesting and intellectually demanding conversations have come from unpacking the various aspects of planning these projects. For example, pause and consider how you might respond to the following questions:

  • How many students need to be involved?
  • How long should the trial last?
  • How prescriptive should we be with activities that receive feedback?
  • How long should teachers spend giving feedback (audio or written)?
  • What direction should we give to control groups?
  • What training should we give and to whom?
  • How much/little flexibility should we give participating teachers in terms of when the intervention takes place?
  • What should we use for the pre- and post-tests? When should the post-test take place in relation to the intervention? Who will mark it? How will we ensure consistency?
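The first question on that list, how many students need to be involved, is the one with a standard statistical answer: a power calculation. A minimal sketch in Python (the effect size of 0.3 is an illustrative assumption, not a figure from our trial, and the formula is the usual normal approximation for a two-arm comparison of mean test scores):

```python
from statistics import NormalDist
from math import ceil

def sample_size_per_group(effect_size, alpha=0.05, power=0.8):
    """Approximate students needed per arm (intervention and control)
    for a two-sided comparison of mean test scores, using the
    standard normal-approximation formula."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for power = 0.8
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# A small-to-moderate effect (Cohen's d = 0.3) needs roughly:
print(sample_size_per_group(0.3))  # 175 students per arm
```

The sobering point this makes concrete is that detecting a modest effect takes far more students than one or two classrooms can supply, which is exactly why recruiting several schools (discussed below) matters.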

Secondly, there is no single right way to run a research trial. I can say though that each project plan improved every time we engaged in discussion with someone who was able to challenge and question various aspects. For each of our trials the IEE were handing over their money to us, and therefore understandably did a lot of the question asking. If your research project isn’t being funded, think about who you will get to unpick your plan and ask questions so that you end up with a better project plan than the one you started with. I think that involvement from others helps de-personalise the project, taking it from a personal pet project to something as objective as possible. Success becomes about getting a reliable answer, rather than a particular answer.

Our next task was to persuade the requisite number of schools to take part. We were able to offer a financial incentive to cover training costs, staff cover, etc. If you haven’t got the luxury of a financial incentive to offer, you will need a different sell – perhaps a common problem that several schools are trying to address, or you could appeal to the professional development gained by involvement.

A big chunk of the professional development comes during the ‘pre-trial’ training. Teachers won’t stick to the brief (we tend not to be a hugely disciplined species) unless they really understand the trial, what could go wrong, and why. This, in my view, is high-quality professional development. We spent some time unpacking the thinking behind the way the trial was organised and openly explored with participating teachers what could lead to the trial going wrong or the results being invalidated. Rather than leaving with a list of dos and don’ts, participating teachers went away understanding the necessity of discipline.

I am more than reasonably confident that, whatever the outcome of the audio feedback trial itself (we will find out in July), we will have enabled a group of teachers, myself included, to become more disciplined and therefore be better able to evaluate what is happening in that classroom next door.



