We had a department meeting on Thursday. We discussed the MOSL, the measures of student learning by which we will be judged. We're ESL teachers, and as you may know, that means our kids are English language learners. So we had to determine some way to be judged by their test scores, because Bill Gates thinks that's the only thing that matters, and therefore Arne Duncan and Barack Obama think so too.
We have a MOSL committee in our school, and in August we determined that we would spread the joy. That is, if your department were to be judged by test scores, you would be judged on department scores rather than individual ones. We did not want teachers to be in competition with one another, and we did not want anyone to feel that helping a kid not in your class would somehow have the potential to do you harm.
So, because somehow the decision had not been made in August, or perhaps because John King had a new and even stupider idea than those he'd had previously, we were asked to make a department decision. One person said that since we were all good teachers, we ought to be judged individually. I said that there was no validity to judging any teacher, good, bad, or otherwise, by test scores, and that a recent study suggested that individual teachers account for only 1% to 14% of the variability in those scores.
Given that, I suggested we sink or swim together, and my department agreed. In fact, our MOSL committee had already made that decision. However, there remains the fact that we are gambling on one another, and that while some of my colleagues may do better as a result, others will certainly do worse. Can there be any validity to a teacher evaluation system that actually asks you to throw the dice and hope for the best?
Let's say, for the sake of argument, that I get a bad rating as a result of this. Does that, by any stretch of the imagination, make me a worse teacher? Let's say, again for argument, that I get a good rating because of our decision. Does that make me a better teacher?
Obviously, I am the same teacher whatever the rating is. I am no better and no worse, whatever John King's rating labels me. First, there is no scientific basis to assume any validity to value-added ratings. Second, you need no knowledge of science to determine that shooting the dice and hoping for the best is absurd.
The other thing our department discussed is which test we ought to be judged on. Apparently, we had a choice of that as well. We took the recommendation of our AP, who said that the NYSESLAT results would likely show improvement. However, I teach beginners, and many of my students had never taken this test, having wandered in in September, or at other times of year when the test was not given. She told me then that the NYSESLAT results would be compared with the city's LAB-R test, which was now more closely aligned with the NYSESLAT. I pointed out that last year it was certainly not aligned with it at all. There was not much she could say to that.
So teacher careers are being put on the line, and we are given various options as to how we'd like it done. It's ridiculous, but there it is. And despite promises that it would be negotiated by the union, it was not, and not one rank and file member got a voice in how this was done.
Personally, I think I could observe a class and make better judgments than any test score, and I'd argue that teacher-designed tests are vastly superior to standardized ones. It was an awful decision on the part of UFT leadership to support this system, and an even worse one to leave it in the hands of fanatical ideologue John King.
The choice is one of which weapon you'd like aimed at your head, in the hope that you will select the one more likely to miss.
Saturday, April 12, 2014
John King's NYC APPR System and the Illusion of Choice
Labels: APPR, John King, NYC Evaluation Decree, teacher evaluation, UFT leadership, value-added, VAM