You may have seen this article in Thursday’s New York Times about the breakdown of NYC’s negotiations on teacher evaluations. Or perhaps you read the following New York Times editorial about it. Regardless, all this Sturm und Drang in New York City revolves around the new teacher evaluation system implemented this year by New York State, called the Annual Professional Performance Review, or APPR for short.
If you look back at my previous post on this (confusingly titled “APPR, CCSS and RTTP. Or, What is the I-ready Test?”) you’ll get a quick summary of how this APPR came into being. (Briefly, it all has to do with requirements districts must fulfill to compete for Race to the Top grant money.)
The idea behind the APPR (and please excuse me for going all acronym on you – I just can’t keep typing out ‘Annual Professional Performance Review’) is to create a universal system to rate the effectiveness of teachers. By the way, “APPR” only describes New York State’s specific evaluation process – other states have created their own systems and call them other things. But all conform to the requirements districts must meet to compete for a portion of the $4.35 billion the Federal government set aside to help improve education through its Race to the Top grant program. Evaluation systems like APPR have been hurriedly adopted by school districts in 48 of the 50 states.
Clear as mud so far?
The goal behind the APPR is admirable — to make teacher evaluations more meaningful and to make it easier to fire bad teachers. But, the hasty implementation of this new system is causing much confusion and strife.
The formula for rating teachers under the New York State APPR guidelines looks like this:
20% from student scores on NYS assessment tests
20% from student scores on local assessments (in Ossining’s case, the I-ready test in K-5)
60% from classroom observations (announced and unannounced)
So, from the above you’ll note that 40% of a teacher’s rating is based on standardized tests. The rest is based on classroom observations – at least three per school year. Seems reasonable enough on the surface. Because, let’s face it, most jobs have some evaluation process and if workers don’t measure up, they get fired. Simple. Why shouldn’t teachers be subject to the same sort of system?
Well, dig a little deeper and you’ll see its inherent flaws. First off, teacher effectiveness is just not as simple to calculate as, say, sales of widgets or year-to-date mutual fund returns. But APPR tries to do just that – treat student results on a series of standardized tests as a direct indicator of how good a teacher is.
It just doesn’t add up for me. One (or two) sets of standardized tests cannot accurately measure what a child has been taught by a specific teacher. And there is no way to account for the subtle variables that affect student performance on standardized tests. For example, what about the kid who missed four weeks of school due to a family trip? Or the kid who just doesn’t like math and fills in the multiple-choice bubbles on their NYS assessment test to spell “Poop”? (Yes, I might have done that in 5th grade.) Or, more seriously, the kid with special needs who requires more time to take a standardized test, or the rephrasing of questions as defined in their Individualized Education Plan? They are out of luck when it comes to the all-online I-ready assessment that Ossining has chosen to use. And I’m not even going to get into how socio-economic and language factors affect student performance on these tests. But all of these “non-classroom factors” can and do affect a student’s grade on a standardized test. Yet in the APPR model, there is no way to reflect that in the student’s test score or the teacher’s final evaluation number.
Thus, it is not inconceivable that good teachers can be graded as “ineffective” while bad teachers can be graded “highly effective.”
My point is that evaluating teachers is a complex issue and the current APPR system is too simplistic and inflexible to do it justice. We should by all means be discussing this and trying to craft better ways to assess teachers. But a hastily implemented, rigid system that imposes a tremendous new layer of bureaucracy on already cash-strapped school districts does not seem either logical or fiscally prudent.
Let’s take a look at how the APPR is affecting Ossining. Just on a fiscal level, I’ve learned that our District has had to spend approximately $300,000 to purchase the most basic software package for the I-ready “local assessment” test. Then, more money has been spent to train teachers and administrators on how to implement and use it.
Add to that the amount of time it takes to administer these tests. For the school year 2012–2013, your K-5 student will spend up to two hours at a time taking an I-ready test on a computer. They will do this a total of six times this year, in September, January, and May, to complete math and ELA I-ready assessments. If my math is correct, they could be spending up to 12 hours taking local assessment tests. That’s 12 hours in which they could be learning something new, not just being tested on what they should know. It almost doesn’t make sense. (And add to that the nine hours of NYS assessments that 3rd–8th graders are already taking. It’s as if they’re missing an entire week of learning.)
But don’t just take my word for it. Below are several well-written and thoughtful articles that deal with this thorny topic. Read them and come to your own conclusions:
“APPR Regulations Poison ‘Spirit of Collaboration’”
This is an excellent, well-thought-out review of what’s wrong with APPR and other similar methods of assessing teachers.
“Right Task, Wrong Tools: The Flawed Appraisal of America’s Teachers”
A calmly argued explanation of current trends in teacher evaluations. This article clarified for me why current standardized tests are “dysfunctional when employed to evaluate teachers.”
“Five Ways School Reform is hurting teacher quality”
Clear discussion of the different initiatives currently being hurriedly implemented and their effect on our students and our teachers.
A remarkable paper written (and endorsed) by a tremendous number of NY state school principals that clearly and concisely sets forth their concerns regarding the rapid implementation of reforms such as APPR and their possible unintended negative consequences.
“Using Test Scores to Evaluate Teachers is based on the wrong values”
The layman’s version of the New York Principals APPR paper. I think the title says it all. The author is the principal of a very highly rated school on Long Island who also co-authored the NY Principals’ APPR paper.