I came across this TES article today via Twitter: "Leadership – Take a long hard look at mentoring", written by Ian Rivers who, besides being a writer, is a professor of human development, head of the School of Sport and Education at Brunel University London, and a keen tweeter.
The article sets out a guide for mentors of trainee teachers, specifically in schools, so I imagine on the direct route into teaching. It makes perfect sense and seems more than adequate in terms of ensuring that both parties understand what is required and expected of them at the outset, that good targets are set, useful records are kept, communication is clear, and support from the university mentor is encouraged and put to best use. The end result should be a job well done which prepares the NQT for the future.
After a ‘chat’ with Ian, two main themes emerged: consistency and quality. Now aren’t they two novel concepts? These words pop up all over the place in teaching, and for good reason: if a framework or guideline is going to be developed, it has to work consistently and be delivered consistently. It also needs to be of good quality, because the consistency of its delivery can only be measured against the quality of its delivery. How do we know how good something is if nobody has identified what good is? And how do we know it’s being done properly if we don’t measure it against whatever that good is?
So how do we decide what good is when it comes to mentoring trainee teachers? In my experience (in other fields as well as teaching) there is good practice done well, bad practice, and good practice done badly. I think the first step to finding good is to separate out these three areas.
The bad we can ignore for now, although it may be useful as an example or indicator of how things should not be done. Sometimes people need to see bad practice to understand how important good practice is.
The good done well can be set aside as a given: it works, and it should be replicated.
The good done badly is perhaps where the most work needs to be done and where the most development will take place; it holds the largest potential for learning and improvement.
One of the best ways to gather this information is to ask about experiences, collecting both qualitative and quantitative data. ‘Who feels it knows it’ and all that; we need to find out from the horse’s mouth what works, what doesn’t and what is missing. So we need to ask the people who know: recent trainees, mentors, and the university link staff who support mentor and trainee. All three are involved in the process, and all three will have tales of where it works and where it doesn’t. But more than asking what works, we need to ask what the value was. If we are talking about measuring quality, we need to know the worth of something.
A process might be good, but was it worth anything? What did it add? For instance, keeping notes of every mentor/trainee meeting might be useful, but what did it add in terms of value for the time and resources used? Was it worth doing? Could a couple of actions jotted into a planner have done the same job in less time? Should the quality indicators be:
- Did regular meetings take place?
- Were actions identified?
- Were development points identified?
- Were actions followed up at the next meeting?
- Were development points added to the trainee’s development plan? (This would then link to QIs around the development plan itself, e.g. was a development plan maintained, was it discussed with the university tutor, etc.)
If these are the indicators, then the swathes of paper being printed and emailed around could be spared, as could the time spent producing them, because the indicators can be met with notes jotted down in a planner.
For quality purposes the mentoring process would have to be dissected into its component parts; these would be the most crucial elements of the process and may include:
- Mentor/trainee/university communications – possible QIs: how many interactions would be expected, records kept
- Development plan – possible QIs: frequency of update, inclusion of items from meetings and observations
- Classroom observations – possible QIs: frequency, range of classes, anticipated levels of improvement
Perhaps quality could be measured in terms of improvement in a number of areas on a scale, for which a benchmark could be devised in terms of an end point or improvement against a starting point. Mentor, trainee and university link could mutually agree a position on the scale.
- Competence – how well they plan and teach, and how well they demonstrate knowledge of their subject
- Confidence – how their level of classroom confidence increases: not just at what point they stop shaking, but how they begin to become more flexible against the lesson plan, take advantage of seize-the-moment opportunities, and manage behaviour situations
- Satisfaction – how they feel about the mentoring process overall, and how close it came to meeting expectations
- Integration – into the teaching team and the profession; do they contribute to staff meetings?
It could be that a scale is set with ten points, and it is anticipated that a trainee will progress at least four points during the training period, or that they will achieve the seventh point as a minimum by the end of it. Final feedback from mentor and trainee would show whether those QIs are being met, whether they are set too high or too low, and whether they have any value.
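As a rough illustration only, the two alternative benchmarks described above could be sketched in a few lines of code. The function name and the exact thresholds are hypothetical; the figures (a ten-point scale, a gain of at least four points, or reaching the seventh point) are simply the examples from the paragraph.

```python
# Hypothetical sketch of a scale-based quality indicator check.
# The text suggests two alternative benchmarks on an agreed 1-10 scale:
# either the trainee gains at least MIN_GAIN points over the training
# period, or they reach at least MIN_END_POINT by the end of it.

MIN_GAIN = 4       # illustrative: minimum improvement over the period
MIN_END_POINT = 7  # illustrative: minimum final position on the scale

def meets_quality_indicator(start: int, end: int) -> bool:
    """Return True if the trainee's agreed positions on the scale
    satisfy either of the two illustrative benchmarks."""
    if not (1 <= start <= 10 and 1 <= end <= 10):
        raise ValueError("scale positions must be between 1 and 10")
    return (end - start) >= MIN_GAIN or end >= MIN_END_POINT

# A trainee agreed at 3 who finishes at 8 gains five points and
# passes the end-point benchmark too; one who moves from 2 to 5
# meets neither benchmark, which feedback might show is too strict.
```

In practice the positions would be mutually agreed by mentor, trainee and university link, and the thresholds themselves would be revised once final feedback showed whether they were set too high or too low.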
Another idea is that mentoring of trainees should be done by teachers who are not too far removed from memories of their own trainee days. I’m not saying older teachers have memory loss, but they may have been trained in very different ways. A relatively newly qualified mentor will have tales to tell that are relevant, rather than “back in my day” stories.
Being a mentor should be time spent which counts towards CPD. Making it a target for a mentor to produce a decent NQT is tricky, because a mentor can’t be blamed for a trainee who runs screaming from the building after a half term, deciding that teaching is not for them after all. But some targets could be set and measured for the mentor, much in line with the quality indicators already suggested. This is particularly true if the mentor is graded against the four areas mentioned above, with that grading influenced by the trainee’s own input and feedback.
Of course consistency and quality are key, because there are other routes into teaching, such as pre-service teaching qualifications, which need to be considered, along with idiosyncrasies in different areas of teaching, from Early Years to Secondary to FE and HE. But most of these suggestions for quality can easily be mapped onto each scenario, and consistency can be assured. A direct-entry trainee should be anticipated to have a similar experience to a pre-service student on placement, in terms of both the quality of the experience and of the mentor.
As a pre-service PGCE student who had an entirely different experience on placement to the one imagined, I would advocate anything which seeks to establish and maintain consistency and quality, and which encourages and utilises feedback on the experience. It is one thing to complete course expectations and another to have found the experience as useful as it might have been. Surely, if something was not as useful as it might have been, the quality was not what it might have been; and that could be because there was no set of QIs to hold it accountable against.
These are just my immediate thoughts, and I’m sure Ian would be delighted to hear others from all perspectives. You can leave comments on this blog, comment directly under his TES article, or contact him via Twitter. I’m very interested in this, so I shall be keeping a keen eye out myself.