I'm going to put this out there, even if it costs me street cred--I have a great supervisor. She's the best ESL teacher I know, and I have, in fact, seen her teach. If I have an issue, I can go to her and ask for advice. Most of the time I will use it, and if I decide not to, she doesn't get all bent out of shape over it. (I apologize in advance to the many, many city teachers who can't empathize, but there are some great administrators out there. And yes, I know others.)
This notwithstanding, I'm not terribly involved with what she writes on the observations. Like many teachers, I figure anything effective or above means all's right with the world, and who really gives a golly goshdarn what else it says?
I know I'm not the only one who feels this way. I know in particular because I'm the chapter leader of a large school, and no one ever comes to me with an effective rating, or a highly effective rating. I get to see all of the bad ratings, though. No one is happy with those. Sometimes there are things we can do, but not enough.
Though I've railed about junk science many times in the past, and though I'm still convinced it ought to play no role in rating a teacher, I have seen it pull a whole lot of people up in my building. Last year there were no adverse ratings. One supervisor saddled an effective-rated member with a TAP, or Teacher Assistance Plan. This is a non-contractual imposition of a TIP, or Teacher Improvement Plan, mandated for teachers rated developing or lower. Unfortunately I didn't find out until May, and it took me a few weeks to get it killed.
For me, the new observation process is nonsensical, particularly when compared with its predecessor. There's so much more and better information in a freely written observation report than there is in some checklist. I really do believe that there is no way you can codify classroom instruction into a rubric. Possibilities are infinite, and there's no way Danielson could anticipate them all. I can't sit around worrying about what it says on 2b, or not 2b. I would fully read and reflect on a written report.
However, the elephant in the room is not, in fact, the mode of observation. I once did a project that a supervisor observed. It went well. The supervisor asked me for the manual I got the activity from. She copied the manual word for word, and that was my observation report. What can you do with someone so fundamentally lazy she can't be bothered doing what she's paid for?
While I liked the previous method better, if I had a lazy, crazy, or incompetent supervisor it would make little or no difference. This supervisor might trash me whether I were good, bad, or indifferent. I've seen this happen. In fact, in my own school I got to watch a videotape of a lesson that contradicted the supervisor's report in multiple respects. What exactly can you do when your supervisor just makes stuff up? And for the record, a positive observation from a bad supervisor, while not as hurtful, is equally useless.
I have another member who worked in Banana Kelly. She got a negative rating there for observations. The only real issue was the observations didn't actually take place. Not only that, but one of these observations was said to have taken place on a day my member was sick at home. This is one lazy principal, who can't be bothered to check whether or not the teachers are in on the days observations are falsified. I think the principal was removed, but I don't recall whether or not we were able to fix my friend's rating.
As for the entire rating issue, if we ignore the plague of bad supervision, which no working teacher can, what we're left with is the law that enabled it. The very worst part of this law, the one that really needs changing, is the one placing burden of proof on teachers rated ineffective two years in a row. The notion that anyone is guilty until proven innocent is abhorrent and un-American.
Of course the plague of bad supervision, while ignored in most of the press, cannot be ignored by those of us who face it each and every day. I don't know how many bad administrators there are, but given what I hear I'd guess 25-40%. That's outrageous and unacceptable. All the nonsense we read in the paper is the tip of the iceberg. There's a reason not a week goes by without Sue Edelman turning over a rock, only to find some supervisor or other crawling out, and probably not for the first or last time.
Former principal Carol Burris once told me that she used to observe once a year. She would only go back if there were some issue, like a complaint, or perhaps the teacher asking for help. That's a reasonable position. This year there should be some sort of APPR training, perhaps a video with the chancellor and Mulgrew. I know the message will be that observations ought not to be used as a "gotcha" process. Of course that's as it should be. (Update: There's a video of Mulgrew and Carranza saying that right here.)
However, until and unless New York City does a deep cleaning to root out incompetence and vindictiveness in administration, no observation process will ever be worthwhile overall. I don't know about you, but I'm not holding my breath for that to happen any time soon.
I submitted the above response to the city and provided my email address in case they wanted to ask me more. I'll be happy to speak with anyone who contacts me. After I posted the response on Facebook, several teachers said they'd do the same. I told them if they did that, the educrats at DOE would have a lot of responses to ignore.
That's exactly what I expect them to do. And by the way, here's the survey. As far as I can tell, the geniuses who put it up couldn't be bothered to require respondents to authenticate their responses. I guess when you don't care whether or not results are reliable or honest, little things like that don't matter. The beauty part is that if anyone above them doesn't like the results, they can just go in and falsify them to their heart's content.
Good thing de Blasio kept all those Bloomberg leftovers in DOE. It isn't easy to work up a team like that.