Tuesday, November 14, 2017

The Difference Between An Ineffective And Unsatisfactory Rating
I have been following the nyc educator blog with great interest, and in a couple of his posts he mentions that he asked the UFT leadership various questions about how many members received an "Ineffective" versus an "Unsatisfactory" rating. The reason nyc educator brought up the issue is that the UFT leadership, in trying to sell how wonderful the teacher evaluation system (APPR) is, compared it favorably to the old "S" and "U" system. It's like trying to sell a broken-down mule as a racehorse.

According to the UFT, there were 3,000 annual unsatisfactory ratings under the old "S" and "U" system, compared to only 217 ineffective ratings last year. The problem is that it's like comparing apples and oranges. Under the old "S" and "U" system, a teacher could get an "Unsatisfactory" rating for many different reasons simply by getting a letter to their file that year, despite being an effective teacher. For example, a teacher has an altercation with a student, and despite the student being the aggressor, the Principal dumps a letter in the file charging the teacher, who was just trying to defend himself, with corporal punishment. The result is an unsatisfactory rating for that year. In another example, a Principal told a teacher to do lunch duty, even though no teacher was assigned lunch duty as a circular six assignment. The teacher said no, was charged with insubordination, and received a letter to the file, which gave the Principal the right to give the teacher an unsatisfactory rating. Many ATRs received an unsatisfactory rating simply by getting a letter to the file for trivial offenses like making a silly joke or showing up late to a classroom in a new school.

By contrast, an "Ineffective" rating is based solely on pedagogy, not on alleged misconduct. An ineffective rating is determined by the teacher's classroom ability and the teacher's students' growth scores, based upon the "junk science" of high-stakes testing.

Of the 3,000 unsatisfactory ratings (the number turns out to be 2,000), how many were based on incompetence and how many on alleged misconduct? Moreover, with an unsatisfactory rating the burden of proof stays on the DOE in any 3020-a case. On the other hand, ineffective ratings are based entirely on alleged teacher incompetence, and the burden of proof shifts to the hapless teacher to prove he or she is not incompetent. A high bar to jump over.

In conclusion, comparing unsatisfactory ratings with ineffective ratings is more like comparing a misdemeanor with a felony. They are both negative, but one is so much worse than the other.


27 comments:

Anonymous said...

Let's be 100% clear here: each and every UFT rep is still on the old "S" and "U" system. If it is so bad, why are they rated by it? Go ask the next UFT rep who shows up at your school which type of evaluation system they prefer.

Anonymous said...

A Developing rating might not get you fired, but it does two things: 1) it gets you a TIP (Teacher Improvement Plan), and 2) it gives you extremely low self-esteem. It is a depressing, horrible thing to look in the mirror each day and think, "Even after 20 years of teaching, I am now considered a developing teacher." Years of experience don't mean squat anymore. Teachers are like porn stars: you are only as good as your last scene.

Anonymous said...

The evaluation system does not mean anything. In the wrong hands, it is easily abused.

Anonymous said...

Please note that anytime a Principal enters a class for the purpose of rating, a very wide range of things is observable. If the principal or rating officer "wants to set you up," they can choose only the observable evidence that supports an ineffective rating. Another rater can enter the same class and rate the same teacher for the same lesson as effective, simply based on what this latter observer wishes to observe. The only conclusion left is that as a measuring instrument, this one is sadly imprecise. It reminds me of a cheap-ass bathroom scale you buy at Duane Reade for $19.95; today you weigh 130 lbs and tomorrow you are 160 lbs on the same instrument. What is it really measuring???

Anonymous said...

A Developing rating also means people see it and believe it. After all, it must be true if an administrator wrote it.
It's a pack of lies.

Anonymous said...

Chaz, I think for an untenured teacher it really does not matter that much, but for a tenured teacher it is preferable to be rated under the old system as opposed to the new. I think the old system was fairer in many ways (that is not saying much, though).

Anonymous said...

There are teachers who got two years of Developing ratings, were brought up on 3020-a charges, lost the 3020-a, and were fired. The reason the arbitrator wrote: no improvement, and the teachers are not effective. Be careful.

Anonymous said...

9:38,

That is exactly right. Depending on who the observer is, one person's effective could be another person's developing or even ineffective. As flawed as the old S and U system was, there was less room for pedagogical manipulation. Whether the administrators wanted to or not, they were duty-bound to rate teachers based on less subjective factors such as room cleanliness, personal appearance, punctuality, etc. Danielson was based on the assumption that it would never be abused or misinterpreted.

In my fourth year I was being set up, and I ultimately lost my job over this bullshit. I knew I was being set up and I knew what the final outcome would most likely be, because it was that damn obvious. I received nothing but ineffectives in my fourth year. In my third year, when the DOE switched over to Danielson from the old S and U system, I received all effectives. Unfortunately, my administrators had other plans for me in my fourth year. Other teachers who did an inferior job (some of them didn't even have their lesson plans with them when they got evaluated) would get effectives on their observations by that same observer. But when that same individual came into my room, she entered with the intention of finding any little thing she could, solely for the purpose of justifying every possible ineffective. The other teachers weren't doing anything different or better than I was. It was just that I was the one being targeted and the other teachers weren't, hence the huge disparity in our ratings.

Just to illustrate how ridiculous and crazy this system can be in the wrong hands, I would like to share a personal experience where I received straight ineffectives for a lesson that I strongly believe was deserving of all 3s and 4s. By anyone's estimation, this lesson would have easily been rated effective by anyone who wasn't under marching orders to rate the teacher poorly. During my formal, my observer came in and stayed for the entire 45-minute period. There were 13 kids in the room that period, and almost every single one of them was outstanding in every measurable way for the entire duration of the period, except one student who came in ten minutes late and was obviously distressed about something that happened in another class that had nothing to do with me. He ripped up a piece of paper and put his head down. I, several paras, and even the observer herself approached the student, and he just shut down and would not talk to anyone. Because the observer intentionally wanted to rate me as ineffective, she chose to focus on just that one student as her justification for an ineffective in that particular domain, to the complete exclusion of everything else she saw.

Anonymous said...

If this so-called instrument to measure teacher effectiveness is so dependent on the observer, can you tell me what it is REALLY measuring?

Anonymous said...

It depends on the administrator; not all of them are acting in good faith.

Anonymous said...

So if the instrument relies so much on the administrator, then not only is it lacking in precision but in reliability as well. In this postmodern era of sophisticated learning, why would a group of professional educators impose this upon themselves?

Anonymous said...

That's why I don't get upset over it in retrospect. I am upset that I lost my job, but I am not upset about the fact that I got an ineffective, because it is a bullshit theory based on bullshit practice. Interestingly, my current school, which is a non-DOE school, also uses Danielson, and once again (wouldn't you know it?) my scores are back to effective and highly effective, as they always were and should have been that year.

Anonymous said...

But that's my whole point: with a rating system where so much weight falls on who the observer is, it doesn't make sense for that rater to rate the teacher on purely subjective things.

Anonymous said...

Abusive Principals and Field Supervisors are using the observation process in an arbitrary and capricious way to target older teachers.

Anonymous said...

Many of them are lying, and there are no checks and balances on Principals like Dwarka.

Anonymous said...

It's the UFT's fault, because they have not clearly defined the rules on how it is to be used.
For example, all observable sections should be rated, and evidence should be provided when a claim is made.
All ratings that don't follow the rules of fairness should be thrown out.
So the UFT is allowing them to lie and has not provided us with any avenue for assistance except 3020-a. You shouldn't have to go to the end of the process to get someone to stop lying.
This is a violation of due process. Perhaps one day a group of teachers can sue them over this. They made an agreement on our behalf that violates due process.

Anonymous said...

I know that Charlotte Danielson is on record saying she "never" intended her system to be used to rate teachers, but has anyone gotten in her face and told her off yet? Her "system" is so poorly thought out (it relies on admins being fair and just, which is like trusting foxes to guard hen houses).

I have seen it abused many times. I was abused one year by an admin and her sidekick when she decided she hated me for no reason at all. This was after two previous years in which she loved me and rated me highly. Same lessons all three years, too. How did they suddenly become "bad"?

The system is like a hammer put in the hands of a petulant child. If they like you, they circle higher ratings on the rubric, regardless of what you do. If they don't like you, you could be the top teacher in the world, doesn't matter, they will circle lower. Who can question them? There are no checks and balances.

One of my fellow teachers (a gym teacher) got some 1's on his evaluation last year. Mind you he is a master gym teacher. He had a meeting with the AP and argued with her until she raised the ratings to 2's. Not all of us have the time or chutzpah to do that. The MOSL is the only thing that has saved a lot of us older teachers. My current admins love me, but it's a new year and I just got observed and they are showing poker faces to everyone.

Few other jobs have the stress levels that ours does! If only 30% of teachers make it to retirement, I can understand why. I take it year by year now. I do all the per session I can to pay off debts and be ready just in case they want to do a hit job on me.

If a 'loved' teacher like me feels this way, how does everyone else feel?

Highly Effective King Clovis said...

I feel for people who get rated unfairly. Getting rated Developing sucks. For three straight years after the end of S/U, I was given Developing, always missing Effective by .2 to .4 points.

The last two were provisional appointments, by incompetent admins who had no intention of giving many effectives.

Last year and this year, I chose not to take any appointments. Somehow I'm Satisfactory again. Huh.

Anonymous said...

5:45, the "evidence provided" can be manipulated and represented in a way that takes it totally out of context. This happened in my situation. The idiots who agreed to Danielson as the primary teacher evaluation tool were under the assumption that it would be used by a fair-minded and reasonable evaluator who would look at all the facts and all the evidence within proper context. Unfortunately, the truth as we all know it is that it's being used 180 degrees in the opposite direction.

Aside from the butchering of the evidence, the number of domains chosen to rate teachers was not equal either. In addition, the domains themselves were not the same. Some people were evaluated under 8 domains, some 3, and some 4 (this was for the same round of observations).

So the evaluators had complete autonomy to manipulate the evidence, the number of domains, and the types of domains in the observations.

Anonymous said...

Not reliable. Hmm. I like that phrasing.

Anonymous said...

First of all, the "evidence" collected is not at all objective, but merely something a principal or rater says or, worse, interprets. I feel for the poor teachers who have to be subjected to this BS. Is there anything that can be done at this point to show member disapproval of this "NON" instrument?

Anonymous said...

The evaluation system can be manipulated to target teachers for termination. Field Supervisors do it all the time if they have a clear agenda against ATRs.

Anonymous said...

The rating system is a complete sham.

Anonymous said...

Unfortunately there are many administrators acting in bad faith.

Anonymous said...

At least under the "S" and "U" system, more things that really matter counted (room appearance, attendance, etc.). Now everyone is rated by only 7 components in Danielson. Remember that not everything that counts is being counted today. The fact is that our current evaluation system is a nightmare: too many observations, and the totality of our teaching is not being looked at. Teacher morale is at an all-time low, and the evaluation system is one of the main reasons for this, if not the main reason.

Anonymous said...

There are no checks and balances on administrators acting in bad faith. Our UFT is responsible for that.

Anonymous said...

Totally agree. The UFT has been useless for years. They keep looking the other way.