Shut Up Jasons (and Sandra)!

When the Jasons think about talking about education, they should look to this man for some advice.

Maybe it's the deadlines. Maybe it's the lack of material. Maybe it's because they are working for a struggling newspaper, and education seems to be a hot-button issue. Our favourite Jasons from the LA Times, who helped start the infamous LA Times Project and justified it with their weak online chat, are at it again. I am not sure what their motives are. However, whatever they are, I wish they would just shut up!

The Jasons (and Sandra Poindexter – boy, did she hitch a ride on the wrong wagon) have written another article, just in time for the holidays, to further justify their project that uses "value-added" data to determine teacher quality. In their "In reforming schools, quality of teaching often overlooked," I found myself moving from anger to pity for the two "journalists." They were so desperate to seem on the right side of this issue that they contradicted themselves, used weak arguments, and glossed over important information in typical Waiting for Superman style.

“Despite the best efforts of…principals, and an army of well-intentioned reformers…[Markham has been considered one of the worst middle schools in California].”

While reading that line in the article, I couldn't help but hear the Funeral March playing in the background. The claim being made here is that the principals and the "reformers" couldn't do anything to turn that school around. That is a sad fact. However, does that mean it was automatically the teachers' fault? Aren't the principals responsible for maintaining their staff? If you are in it for the right reasons, aren't reformers supposed to be there until the job is done? If the Jasons' rhetoric rings true, then no, these things don't matter. The failure of Markham Middle School is a function of the ineffective teachers that fill the classrooms in the building. (Sarcasm.) What the Jasons failed to do (in their agenda-riddled brains) is take a critical look at the principals and the reformers.

The more "journalism" the Jasons expend on education, the clearer it becomes that they are unaware of how a school should operate and how the stakeholders in a school play varying roles in making it a success or a failure. In the form typical of their "journalism," a seemingly terrifying narrative is set up:

In the last seven years alone, they tried changing the curriculum, reducing class size, improving school safety, requiring school uniforms, opening after-school programs and spending a lot more money per pupil.

In setting up this terrifying narrative, they glossed over the fact, mentioned earlier in the article, that the school had nine principals in twenty years. Any good teacher or principal knows that for any reform to come to fruition, there must be stable leadership. Clearly, this school lacked the stable leadership needed to make the reforms successful. Furthermore, seven years' worth of various reforms isn't going to change a school that is dealing with problems that are decades old. However, the Jasons rushed to say that:

"[T]he only thing they didn't do was improve teaching – at least not until last year when layoffs swept away the school's worst performers and test scores jumped."

The expediency of this point is supposed to set up in readers' minds that the kitchen sink was thrown at the problem, and at the end of the day the teachers were the problem. Not the inconsistent leadership. Not the lack of time to see the "reforms" developed and implemented from leader to leader. NO! The problem falls squarely on the teacher. This is an absolutely disingenuous and weak argument with the quality of a high school freshman's.

You can add the Jasons to the legions of "edreformers" and "edpundits" who attempt to paint the picture that political, historical, economic, and societal injustices and inequalities play only a small role in schools and the lives of the students who attend them.

Markham, a maze of brick bungalows in one of the poorest and most crime-ridden parts of south Los Angeles, was not always considered a failing school.

Tucked away in a dusty storage area above a sixth-grade science classroom are several boxes of trophies from the 1980s that honoured the school for its academic prowess.

By 1991, however, the school’s test scores had fallen far enough to inspire a turnaround effort.

It is quite hysterical that the Jasons are trying to compare the contemporary Markham to the 1980s Markham to make an argument about the "good old days." (In their defence, this is a sentiment shared by many "edreformers.") The most ludicrous comparison is the "1980s plaque standard" to the "2010 standardized test standard." In other words, plaques were enough to prove academic prowess in the 1980s, but we absolutely need standardized tests and value-added data to make that determination today. That is essentially comparing grapes to watermelons (on the scale of the hype given to standardized tests). The second fallacious comparison is historic in nature. It is no mistake that this school was doing well (by the plaque standard) in the 1980s but had fallen behind by the 1990s. The 1980s were also the years of the Reagan-Bush era, which saw the end of federal involvement in egalitarian education and focused on the rights of individuals rather than the needs and rights of minorities. Furthermore, those administrations created an apathy towards "the common," arguing that it was a threat to liberty. This ultimately translated into a severe scaling back of government spending on education, which was the case from Reagan to Bush and through the Clinton years. This was all while white flight was at its highest, many cities and states were struggling to reinvent themselves politically and economically after deindustrialization, and the country dealt with trade policies that were not favourable to economic development. However, the Jasons didn't mention any of this. The Jasons set up a narrative that seems to blame the teachers for the decline of the Markham School. They left out that, as a consequence of conservative fiscal policies, many schools (including Markham) had unfavourable working conditions, a lack of resources, and, by the Jasons' own admission, inconsistent leadership.

If credit is to be given to the Jasons for their acknowledgment of history and the conditions in which students live, it can all be boiled down to these two sentences.

“Research has shown the [standardized test] results are largely a reflection of socio-economic status: poor and minority students often start school well behind their wealthier counterparts.”

"When Watts has the same things as Brentwood does, then you might have equal scores," said Markham English and Social Studies teacher Teresa Sidney.

However, just because they mentioned it doesn’t mean they particularly believe it or really understand it. That became clear in their very next sentence:

Short of that, those students need to learn at least 1 1/2 years’ worth of material for every year of schooling to erase the achievement gap, experts say.

In this statement, the words that should give the most pause are "short of that." They are essentially saying that, short of making sure that two schools have the same resources and opportunities, the students need to learn at least 1½ years' worth of material. That is like saying that, even though one baker has a spoon and the other has a mixer, both must make 50 delicious cakes in the same amount of time, and that this precedent is acceptable. The Jasons are right; the students probably do need to learn 1½ years' worth of material. However, is it not a valid statement that schools should receive equal funding and resources? Isn't there an injustice when some schools get more than others, yet all schools must "perform" at the same level? To the Jasons, I guess not. To the Jasons, finding a score that "largely controls for socioeconomic status" is a better solution than finding ways to help students improve their socioeconomic status. Moreover, the omnipotent teacher must teach 1½ years' worth of material with resources largely reflective of the students' socioeconomic status.

The Jasons are simply beating a dead horse. They have taken the oldest marketing trick in the book and dolled up inaccurate, unsupported, and in some cases ludicrous arguments in the "stories sell" motif. It doesn't matter how many stories of individual "success" they tell; there is still a problem around the country that is deeper than what the Jasons would have us believe is easily corrected. While I believe that everyone involved has to take a "no excuses" stance towards fixing the problems with America's schools, that mantra has to be consistent in understanding, unveiling, and addressing ALL the issues that got us here in the first place. This includes teacher quality. I am convinced that the people involved aren't ignorant of this; I believe what George Carlin is saying in this video:


Stories Sell: The Jasons and Doug Covered Their Eyes and Put Their Fingers in Their Ears While Telling a Story

The Jasons and Doug strike again in their latest attempt to prove the validity of value-added assessment, "LA's Leaders in Learning," published on August 21st. As I read through the piece, I was reminded of a lesson from one of my former bosses when I was in college. I had been asked to create an advertising campaign showing the benefits of financial literacy for college students. My boss told me to keep one thing in mind: "Stories sell."

It seems that the Jasons and Doug continue to cover their ears and close their eyes to the AMPLE evidence that their project is flawed. They used their most recent article to offer more anecdotes meant to get the reader to ignore the facts.

I am not sure if my boss ever met the Jasons and Doug, but they surely got their information from the same school of thought. The striking thing about their most recent article is that it seemed like they were trying to convince themselves of what they were saying, but I digress.

Let me tell you several stories.

There is ample evidence that the use of any "high-stakes" assessment model to retain students, sanction schools, and punish teachers has led to an increase in cheating incidents – most recently in Atlanta. This evidence has been well documented, going back to the days before value-added became a prominent area of contention among educators:

Teachers Caught Cheating – CBS News 2003

Analysis Shows TAKS Cheating Rampant – Dallas News 2007

Under Pressure, Teachers Tamper with Test – New York Times 2010

There is ample evidence that the use of value-added assessments for evaluating teachers is still largely flawed and needs further development:

Study by Kane & Staiger, 2002 for the U.S. Department of Education

Value-added assessments have estimation errors from two sources:

1. Random differences across classrooms in measured factors related to test scores, including student abilities, background factors, and other student-level influences.

2. An idiosyncratic, unmeasured factor that affects all students in a specific classroom (e.g., a barking dog or a disruptive student), which reflects year-to-year volatility.
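To make the volatility from that second error source concrete, here is a minimal simulation sketch in Python. All the numbers (the teacher's "true effect," noise levels, class size) are invented for illustration; this is a bare caricature of the issue, not the Kane & Staiger model itself:

```python
import random

random.seed(0)

def classroom_value_added(n_students, true_effect):
    """Simulate one year's measured 'value-added' for one teacher.

    Each student's score gain is the teacher's true effect, plus
    student-level noise (error source 1: which students happened to
    land in the class), plus a classroom-wide shock (error source 2:
    an event that hits every student in the room that year).
    """
    classroom_shock = random.gauss(0, 5)          # e.g. a disruptive year
    gains = [true_effect + classroom_shock + random.gauss(0, 10)
             for _ in range(n_students)]
    return sum(gains) / len(gains)                # the measured estimate

# The same teacher (true effect: +3 points) measured over five years:
yearly_estimates = [classroom_value_added(25, true_effect=3.0)
                    for _ in range(5)]
```

Even though this hypothetical teacher's true effect never changes, the yearly estimates bounce around it, which is exactly the year-to-year volatility the critique describes.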

David Cohen uses satire to drive home the errors of value-added assessment.

There is ample evidence that the use of "high-stakes" assessments and analysis models (whether it's the API or VAA) leads to a narrowing of the curriculum to the "testable" subjects and skills, along with other effects on students:

Analysis of Urban Schools by Diane Chung-University of Michigan

Problems with the goals of NCLB

While these stories cover only a small portion of the mounting evidence on high-stakes tests and assessments (which is what VAA would be used for), the Jasons and Doug are intent on using their stories to prove their point.

The Jasons and Doug are in many ways contradicting themselves through their own rhetoric, albeit implicitly. In their most recent article, they dedicated a significant portion to scrutinizing the limitations of the API, or Academic Performance Index, used by California to "rank" schools in accordance with No Child Left Behind (NCLB).

“The API obscures the fact that students at Wilbur had the potential for further growth that went unrealized. Instead, they tended to slip every year while those at other esteemed schools in well-off neighborhoods made great strides. It also obscures the gains in schools in impoverished areas.”

“In elementary and middle school, the 1,000-point index is based entirely on how high students score on the state’s annual tests, given in Grades 2 through 12. According to state data, 81% of the differences among schools reflect socioeconomic factors such as poverty and parents’ education.”

Scores like the API were developed and implemented partly because of NCLB. Diane Ravitch discusses the many prongs of NCLB in her most recent book, The Death and Life of the Great American School System. One of those prongs is particularly important here:

"All public schools receiving federal funding were required to test all students in grades three through eight annually and once in high school in reading and mathematics and to disaggregate their scores by race, ethnicity, low-income status, disability status, and limited English proficiency. Disaggregation of scores would ensure that every group's progress was monitored, not hidden in an overall average."

That last line is particularly important here. The development of the API was supposed to ensure that progress was monitored, and that monitoring was supposed to cut across the various population groups. In other words, this was an attempt to "control" for the outside circumstances that a student may deal with during and after school. By the Jasons and Doug's own admission, that is not what the API has done.

“But even those who designed the API more than a decade ago…. recommended measuring student progress as soon as possible.”

“The superiority of looking at student growth was recognized from the very beginning,” said Ed Haertel, a Stanford professor and testing expert who helped develop the API for the state. “It’s much more sensitive and accurate than the current system.”

But California, like most states, isn't doing it.

They suggest on their "More on the Value-Added Method" page that "[v]alue-added measures individual students' year-to-year growth. The API compares the achievement level of one year's students with those in the same grade the previous year. Experts say both are important and tell parents different things about a school." The question that comes to my mind is: how is a parent supposed to choose which one is more important, the API or VAA?
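The distinction that page draws can be made concrete with a toy sketch. The numbers below are invented, and both functions are bare caricatures of the two measures, not the actual API or value-added formulas:

```python
def api_style_change(this_years_cohort, last_years_cohort):
    """Cohort comparison (API-style): average score of THIS year's
    3rd graders minus LAST year's 3rd graders -- different children."""
    return (sum(this_years_cohort) / len(this_years_cohort)
            - sum(last_years_cohort) / len(last_years_cohort))

def value_added_style_growth(paired_scores):
    """Growth (value-added-style): each student's score this year
    minus the SAME student's score last year, averaged."""
    gains = [now - before for before, now in paired_scores]
    return sum(gains) / len(gains)

# Invented example: last year's 3rd graders scored high, this year's
# cohort started lower but every student in it grew by 10 points.
last_cohort = [80, 85, 90]
this_cohort_before = [50, 60, 70]    # this cohort's 2nd-grade scores
this_cohort_after = [60, 70, 80]     # the same students in 3rd grade

cohort_change = api_style_change(this_cohort_after, last_cohort)   # -15.0
growth = value_added_style_growth(
    list(zip(this_cohort_before, this_cohort_after)))              # 10.0
```

The same classroom looks like a 15-point decline by the cohort comparison and a 10-point gain by the growth measure, which is why the two numbers can "tell parents different things" and why asking a parent to pick one is not a trivial request.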

This is the issue I am having with the entire LA Times "study" of education. I often get the image of a small child closing their eyes and putting their fingers in their ears. Despite the fact that the API was all the rage almost ten years ago and has been shown (even by their own analysis) to have problems and limitations, they insist on using a new "trendy" statistical model for evaluating schools and teachers. It is clear that there is NO evidence that VAA will help close the achievement gap for the various populations that were "tackled" in the development of the first statistical model. However, the story of Wilbur Elementary School "starting out as high achievers but [losing] ground on state standardized tests" is supposed to convince us otherwise. Maybe they will use the story of Maywood Elementary, with its high population of students on free/reduced lunch, improving the most. Either way, the LA Times study is a disingenuous attempt to improve education, offering convoluted and inflammatory rhetoric about teachers and schools under the guise that empiricism will save the day. Unfortunately, in this case, the "facts" that the Jasons and Doug are using have done nothing but make misinformation about the nature of schooling, teaching, leadership, and testing more prevalent.

It was issues like this that I wanted (and I am sure others wanted) to discuss in their "chat" last Thursday. However, they circumvented the concerns of many and replaced them with tired rhetoric and ambiguous statements.

Unfortunately, they still haven't changed since the "chat." In their most recent article they said, "The approach generally doesn't penalize schools for things beyond their control." I generally don't eat fish (because I am a vegetarian), but if there is nothing else available, I will. Marriages are generally perceived as a lifelong commitment, but people get divorced.

I really hope that their true intentions surface as a result of this project.

Don't Pee on My Leg and Tell Me It's Raining: Jason Song and Jason Felch Leave Questions Unanswered about the LA Times Project

Title source: Judge Judy's autobiography

This picture sums up my feelings during the Felch/Song chat from Thursday.

Have you ever watched paint dry? Have you ever watched a spider spin its web? Have you ever watched a documentary made in the 1970s? I would imagine that, at least for the first two, most would say "no." I would also imagine that each would be seen as utterly boring and a complete waste of time. Well, today we can add to that list the "chat" with the authors of "Who's Teaching LA's Kids," Jason Song and Jason Felch.

Initially, I was excited to have a candid but informative discussion about the purpose, rationale, and plans behind their article from Sunday. However, that is not what I got. That is clearly not what Sabrina from the "Failing Schools" blog got either. (If you have sensitive eyes, you should skip the next sentence.) That hour was the longest disingenuous attempt at a masturbatory "conversation" I have witnessed in a long time. It was all the more upsetting because it was supposed to be coming from journalists, whose job is to be transparent – they failed miserably.

Cherry Picking Questions

It was clear that Sabrina attempted to ask questions that many had on their minds, but as you can see from her tweets, Song/Felch cherry-picked the questions they wanted to answer and had no interest in a substantive debate or conversation.

“I’ve sent 4 comments so far, all asking about the possibility of misleading the public by offering no context. 1 fragment made it through.”

“I think I’m up to 8 comments submitted, and one “pending” on the site itself. #headdesk”

“9 comments. Instead we get softballs like “will you do more profiles in the future?”

As journalists, the Jasons should have been more transparent about their intentions, and clearer and more concise in their answers to the chat members' questions. Instead, the entire hour felt like a political stump speech. The few cherry-picked questions that were answered did very little to address the concerns that exist across the blogosphere.

Issues with Value-Added Assessments

One of the many concerns that came up across the blogosphere is the limitations of value-added assessments. It became clear throughout the "chat" that Felch/Song were intent on circumventing the actual issue, either by justifying it with "expert analysis" or by using ambiguous words to describe why the assessments are still effective in measuring teacher performance.

[Comment From Clay Landon]
Diane Ravitch wrote on your paper’s editorial page that a standardized test is an awful way to measure a child’s educational achievement. Why do you feel such a test should be used as a measurement of teacher effectiveness?

Jason Song:
One thing all experts said is that value-added analysis shouldn’t be the sole factor in determining a teacher’s effectiveness. Most districts that use it count on value-added for 50% or less. But the test are aligned to state educational standards, so many feel they could be a valuable evaluation tool.

How does this answer Mr. Landon's question? The mere fact that the tests are "aligned to state educational standards" is not an all-inclusive answer demonstrating the benefits of using tests to evaluate teacher effectiveness. Nor did the answer acknowledge the potential problems that may arise from using test scores as a measure of teacher effectiveness. The answer Mr. Song gave clearly indicates that he is possibly aware of the problems, which is why he wanted to make clear that most districts "count on value-added for 50% or less" of evaluations. But if he was acknowledging the problems, he certainly didn't make that clear.

One of my favourite comedians in the world is George Carlin. Since his early days as a comedian, he has used the phrase (a phrase I use in my classroom) "language will always give you away." I love that phrase because it says concisely that the words you choose reveal the way you are actually thinking. It was clear that Mr. Song and Mr. Felch knew there were many reservations (including from education statistics experts) about the value-added approach. Despite knowing of those reservations, they insist on using it, justifying that by detailing their "methodology."

[Comment From Kev]
How can factors such as severely emotional chaos at a student’s home and local gang activity, for example, factor in to a teacher value? My wife teaches in Highland Park where very few parents seem to really care that their children are learning, and certainly don’t care if they advance past their own education which is frequently illiterate.

Jason Felch:
The value added approach has limitations, but its strength is that, for the most part, it does not judge teachers by who their students are — something they have little control over. How can it do this? By measuring student progress against their own past work, not that of other children. Statistically speaking, this controls for many of the factors beyond a teacher's control: student poverty, limited English, chaos at home, etc. The assumption underlying the approach is that many of these factors are consistent in a student's life…if a student was poor in 2nd grade, they're likely to be poor in 3rd.

If my student wrote a paper using the words "for the most part," "controls for many," and "assumption," I would question their argument as something they aren't clear on or convinced of themselves. I believe these answers speak to my blog piece "Editorializing Fear." There is a convoluted dismissal of the research showing that it is difficult to control for the many factors a teacher may have to deal with on a daily basis, justified by following it up with their "research findings." Clearly, Mr. Song and Mr. Felch are advancing an agenda, one that preys on the desperation of a chronically failing urban school district and uses teachers as scapegoats. (P.S. You should read @TeacherSabrina's piece "Scandalize Their Name" for more on the scapegoating.)

Effects with Releasing the Scores

One of the most ubiquitous concerns across the blogosphere about the article and the LA Times project is the release of teachers' value-added scores. Many of the concerns centred around the fact that releasing the scores could cause undue hostility from parents, some of whom are ill-equipped to properly analyze the data. Once again, Song/Felch did little to address these concerns.

Given that parents generally want the “best” teachers, do you worry that this will generate more parental pressure on teachers and cause more teachers to “teach to the test” rather than focusing on providing a more wholistic learning experience?

Jason Song: It could generate parental pressure, but, as I said before, value-added analysis is only part of the picture. It only measures performance on math and English standardized test scores right, so if a parent wants to make sure their child gets exposure to art or other subjects or a teacher’s classroom manner, they’ll have to find other sources of information or visit a school.

[Comment From Effective Teacher]
Jason and Jason — The two of you are doing profound damage to select LAUSD teachers. Your cautionary notes that this data only provides one facet of the true picture will be lost on parents who see their children’s teachers labeled as ineffective.

Jason Felch: And what about all of those incredible teachers who, like one we featured in the story, are eating lunch in their classrooms, unrecognized and unstudied? In the end, we came down on the side of publication. We’re trying hard to put the data in context, and giving all teachers an opportunity to provide additional context or comments before the scores go live.

Mr. Song was completely nonchalant when addressing the issue of teachers possibly teaching to the test. I guess, to Mr. Song, the fact that releasing the value-added scores "could generate parental pressure" is not much of a problem. Apparently, Mr. Felch doesn't seem to care either. Mr. Song's weak justification is that "value-added analysis is only part of the picture." However, he has done little to show that value-added assessments are only part of the picture. He has essentially editorialized VAA as the best measure of teacher performance. Parents, who are not equipped to analyze the data, are then left to interpret something that is flawed and, in his own words, incomplete. I guess it's okay for a teacher to get fired while a parent "make[s] sure their child gets exposure to art…[and] a teacher's classroom manner…[through] other sources of information." Mr. Felch's weak justification is that "we came down on the side of publication." How is this journalism? Aren't journalists supposed to supply the whole picture? At least he was honest: coming down on the side of publication means choosing what will sell. With the current demonizing of teachers in the media, it is clear that any "parental pressure," or the fact that the "true picture will be lost on parents," is just a negative externality.

Journalist Education Credentials

[Comment From LA Times Subscriber]
What credentials do you hold to judge the teaching profession? Do you guys have degrees in education?

Jason Song:
I don’t think many people would label either me or the other Jason as an educational “expert.” But we observed many teachers multiple times and have researched value added analysis for almost a year and checked our findings with leading experts, so I feel we’re on solid ground as journalists.

[Comment From Guest]
They should actually teach for a year, too. 😉
Jason Felch:
I thought you’d never ask! Before going into journalism, I taught middle school and high school students. I also founded and ran an after school program for “inner city” kids in San Francisco, and am very familiar with their challenges. My colleague Jason Song has covered the city’s schools for years.

It was clear that Mr. Song and Mr. Felch tried to maintain their image as authorities on the matter. Mr. Felch even went as far as saying that he taught middle school and high school students before he became a journalist. I am not saying that is untrue. However, it is clear that Mr. Song and Mr. Felch are not aware of the daily realities of a teacher, especially a teacher in an urban school. Why? Because of the answer to this question.

[Comment From Mr.G]
What are your plans for “Grading the Administrators”?

Jason Felch:
Good question. Research shows that teachers are the most important influence on a student’s learning. But principals are also important. Many also used to be teachers. We’re exploring ways to reliably get at this with the data. Stay tuned!

I guess Bill Gates and Jason Felch are looking at the same "research" when making the dubious claim that "teachers are the most important influence on a student's learning." Any teacher knows that they are very influential in their students' learning, but also that a lot more influences children. What happened to the old adage "It takes a village to raise a child"? When you talk to teachers in affluent suburban school districts, magnet urban schools, and some private schools, they all cite many factors, including parent involvement, home status, and personal issues. It is and will always remain a RIDICULOUS notion that teachers are the most important influence on a student's learning, especially when teachers have to contend with many competing influences, including some of the horrifying ones detailed in the "Failing Schools" blog. If there is going to be an examination of teachers, then there should be a concurrent examination of administrators. Not only do they hire the teachers, they set the tone for the school. Read "Stepping Up to the Plate" by my good friend Erin for more on that.

I think my tweet and Sabrina's tweet sum up the sentiment of the entire "chat" from Felch and Song.

@mppolicy: The #LATimes chat was one of the most unproductive hours of my entire life. #smh #ineedsomeair

@TeacherSabrina: Wow, that was a total sham. I wish I’d saved the comments I submitted in vain.

Shake my head Felch/Song.