An Independent Voice That Advocates For The Classroom Educator Without The Corrupting Politics Tied To Our Union And DOE Leadership.
Thursday, March 01, 2012
My Suggestion On How To Use A Simple "Value Added Formula" That Would Work For Teachers Who Are Teaching NYS Regents Subjects.
I am a Science teacher with an extensive background in Math, and when I looked at the overly complex "value-added mathematical formula" used for the teacher data reports (TDR), it gave me a real headache. (If you want to see what the teacher TDR "value-added formula" is, you can find it here). Obviously, there is tremendous wiggle room to apply different values for the various error-prone assumptions in the formula. While my fellow blogger JD2715 can figure it out, for the rest of us it is simply "fuzzy math" that turns our brains to mold. Is it any wonder that the "value-added formula" is considered a joke, since the average error factors can be so large that a teacher rated "average" could really belong either high or low? For example, let's say that an English teacher is rated 50 (average) by the TDR but the error factor is 53%. That margin is 53% of 50, or 26.5 points, so the teacher could really be rated as low as 23.5 (below average) or as high as 76.5 (above average). If we were to use the maximum error of 87%, the teacher could be rated as low as 6.5 (the very bottom) or as high as 93.5 (the very top). This does not even account for all the error-prone factors for students that, in many cases, don't reflect reality. Is it any wonder that real educators find the teacher TDR values useless, since they do not show whether the teacher is really "effective"?
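To make the error-range arithmetic above concrete, here is a short sketch in Python. It is purely illustrative: the function name is mine, and the score of 50 and the 53%/87% error factors are just the figures from the example.

```python
def tdr_range(score, error_factor):
    """Band of possible "true" ratings around a TDR score.

    error_factor is a fraction of the score (0.53 = 53%), so the
    margin of error is score * error_factor in each direction.
    """
    margin = score * error_factor
    return (round(score - margin, 1), round(score + margin, 1))

# An "average" teacher scored 50, with the 53% average error factor:
print(tdr_range(50, 0.53))  # (23.5, 76.5): below average to above average

# The same score of 50 with the 87% maximum error factor:
print(tdr_range(50, 0.87))  # (6.5, 93.5): near the very bottom to the very top
```

As the second call shows, at the maximum error factor the band covers almost the entire 0-100 scale, which is why an "average" rating tells you next to nothing.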
While I do not want my union to negotiate with the DOE until the awful Mayor Bloomberg is no longer in office, I do suggest that, if we must have a "teacher evaluation system", the 20% based on standardized tests should come from the New York State Regents. What I recommend is that in the second week of September, all teachers who are teaching classes that end with a June Regents give all their students the previous June Regents to determine the baseline. Once the students take the Regents at the end of the year, the two scores will be compared to determine the "value-added" for the teacher. This formula is much simpler and a more accurate indicator of how much the teacher actually added to the student's learning. For example, let's say that student "A" received a 35% on the previous June Regents and achieved a 75% on the end-of-year Regents. The teacher's "value-added" grade for student "A" would be 40 (out of a total grade of 100). In other words, the teacher's "value-added" number does not need to account for various assumptions. However, ESL and Special Education students will need to be graded differently, since my simple "value-added formula" may not be appropriate without some additional studies dealing with these special cohorts. My "value-added formula" is as follows.
Value-added = (Post-test) - (Pre-test).
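In code, my whole formula is a single subtraction. Here is a minimal sketch (Python, with a function name of my own choosing), using student "A" from the example above:

```python
def value_added(pre_test, post_test):
    """My simple value-added: June Regents score minus the September baseline."""
    return post_test - pre_test

# Student "A": 35 on the previous June Regents given in September,
# 75 on the actual Regents in June.
print(value_added(35, 75))  # 40 -- the teacher's value-added grade for this student
```

No hidden assumptions, no error-prone student factors: just what the student knew in September versus what the student knew in June.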
The only decision is how to determine the "cut scores", and that can be set based upon simple cohort analysis. Is my recommendation the perfect answer to teacher effectiveness? Of course not, but it is simpler, better, and not subject to the error-prone assumptions the DOE used for the teacher TDRs, or to the danger that a similarly complicated mathematical formula will be used for the "teacher evaluation process".