This is a tough one. When reflecting on my own teaching, I found that I could gauge a sense of my own success from the indicators immediately in front of me: pupils’ questions, their behaviour and my feeling at the end of a lesson. In reality, I didn’t go through a habitual reflective process after every lesson; it’s neither possible nor, in my opinion, particularly helpful. My teaching approach would vary according to the year group (secondary) or topic (Geography), but the fundamentals would be as consistent as I could make them (take-off, transitions, landing, to use an old analogy). So over-analysing every lesson was unlikely to help much.

As I learned my craft and got promoted, I had to learn how to evaluate my team members’ performance. The challenge is that when you, as a teacher, have what you think is a clear sense of how to measure your own performance, how do you apply that (or any other) set of measurements to someone else’s? For example, does their questioning repertoire have to include the same steps as yours? Of course not. When self-reflecting on performance, you are combining measurements taken over a period of time across a range of tasks that reflect different attributes. Usually, those reflections are applied to a framework so that you have the opportunity to measure improvement. So how do you judge another teacher’s performance? In few professions do individuals achieve the same outcomes by making exactly the same moves to get there; nor does any amount of analysis of proxy data amount to an evaluation of your personal, real performance. I think the challenge exists because the cause-and-effect relationship between a teacher’s actions and children’s development and learning is complex. For instance, it is easier to attribute a causal link to a teacher’s performance when their pupils are failing because the teacher is simply getting content wrong, delivering learning in a nonsensical order, or not turning up to teach. Attributing positive causal links should be equally straightforward, yet somehow it has become a growing industry of its own within schools.

As a school leader and inspector, I understood the importance of being able to evaluate teaching – for teachers and pupils. I combined a range of observable factors to triangulate a view of what was working. That statement on its own raises questions: what about the unobservable factors, or the contexts that weren’t (or couldn’t be) measured there and then? Lesson observations, work scrutiny, discussions and performance data are all useful to some extent, because from them one can form a hypothesis that reflects an emerging truth. But each evidence source has its flaws. Lesson observations undertaken in the traditional way (full lesson, pad and pen, reflection meeting afterwards, performance management targets) will always capture a version of what happens with an observer in the room; without the observer, it would be different. Observers can’t see everything that matters to a child’s or group’s learning experience, and they view things through a given lens (typically shaped by a framework, their experience or an expectation). Work scrutiny shows what pupils have interpreted from what they’ve experienced, and similar issues exist when interpreting pupil outcome data. Discussions with those involved are like any review or historical reflection – a version of what happened.

I remember undertaking a review process with a group of teachers where the information from that day’s lesson observations was completely at odds with the class’s work in books and their outcomes data. That was brilliant in a way, because it led to a rich discussion about the teachers’ use of worksheets and scaffolded (not written down) question-and-answer sessions at specific points in the course, which I hadn’t seen. It would have been relatively easy to pick out things that didn’t work in order to improve pupils’ learning, but the discussion and consideration of broader indicators of learning meant we got a good sense of what did work. It took time, but without that longitudinal view it would have been hard to express clearly, concisely and with the same degree of confidence what made the difference to those learners.

One way to improve the reliability of that kind of triangulated evidence base may be to adjust the frequency of the measures and improve the skills of the people doing the measuring. Another is to change the nature of the task: for example, scrutiny of work from a teacher’s lessons might look at curriculum coherence instead of the feedback the teacher has written on the page. Leaders can also stop ‘observing’ lessons in favour of ‘dropping in’ as part of learning walks. But those alternative versions of existing methods require time, change management and training before they are more reliable than the status quo. Another route is to invest in the teaching staff’s professional development, so that the likelihood of any performance measurement indicating a positive outcome goes up. A recent publication by Sims et al. (2021) provides clear advice on teacher professional development and is certainly worth reading. Fundamentally, school leaders have to nurture a culture of positive and collaborative development for any measurement of teaching impact to be meaningful; otherwise, you are likely only measuring the impact of an overriding influence. I joined a UCL CEPEO research webinar recently in which Laureate Professor Jenny Gore underscored the point that we must build up teachers’ optimism and trust when implementing professional development programmes, otherwise they cannot inspire the intended positive change. The webinar also explored far more thoroughly the inherent challenges of measuring teachers’ impact.

In my last blog, I wrote about the feeling I had when writing summary impact reports for governors or trustees. The purpose of those reports, of course, was to inform strategic decision-making: whether to stick or twist on the teacher development approaches taken up to that point. The responsibility to get it right is enormous, and choosing what is likely to work out best for the teachers and children you serve is hard.

My view is that schools should try to [re]assert the bravery and confidence that teachers and leaders demonstrate day-to-day in doing their jobs. If they apply that confidence to asserting their combined views on what they know works in their contexts, or across their school groups, then their decisions are likely to pay off for their teachers and pupils. Decision makers then have the scope to decide what training and support best suit their colleagues, because the culture and vision will be clear to those who need to believe in their leadership. The measurement of teaching impact will naturally follow.


