With varying levels of triumphalism and caution, local news sifts through newly released (and controversial) teacher rankings
Late last week, reporters and editors on the New York City schools beat finally got what they'd been fighting for this past year and a half. Then came the hard part.
Local media outlets, including both city tabloids, The New York Times, The Wall Street Journal and NY1, won a legal battle against the United Federation of Teachers that would allow the city to release some 18,000 rankings of New York City schoolteachers to the press; the data arrived on Friday morning.
The release of the data itself was considered controversial, and journalists faced the conundrum of how to weigh the need to publish the data completely and readably (and, of course, very quickly, since everyone got it at the same time) against the need to contextualize it for what it was.
The stats are based on English and math test scores for the city's fourth- through eighth-grade students, so their usefulness as a measurement of the merits of individual instructors was not a given: the meaning of those test scores is itself the subject of strenuous debate.
Nonetheless, this is the data the D.O.E. uses to make certain performance-based decisions, such as tenure.
Different news outlets presented the data differently. NY1, for instance, published the data in raw form "so readers can evaluate for themselves the ratings contained in the reports and the DOE's methodology that underlies those ratings," while adding context in its reporting on the subject.
The New York Times took advantage of its collaboration with WNYC on the website School Book to develop "a sophisticated tool to display the ratings in their proper context," wrote editor Jodi Rudoren in a blog post.
"The ratings are imperfect, according to independent experts, school administrators and teachers alike," she wrote, explaining the Times' reasoning for publishing them. "There are large margins of error, because they are generally based on small amounts of data. And there are many other documented problems, like teachers being rated even when they are on maternity leave. But the data figured in high-stakes decisions about public employees, and the debate about value-added ratings is continuing as the city and state overhaul the evaluation process."
The Wall Street Journal also developed a sophisticated online rankings tool (the landing page for which calls out the top 10 math and English teachers on the list) with certain safeguards in place.
"We have very specifically labeled how wide the margin of error is," said Lisa Fleisher, the Journal's local education reporter, on "Good Day New York" this morning. "There's a good chunk of data that you have to kind of look into, and one of the things you're gonna see, sort of the stand-out number that you'll see, represents how well that teacher did compared with teachers in their same year, with the same level of experience, the same level of students. So it's not like you're comparing a veteran teacher with a freshman."
And then there was The New York Post, whose Saturday wood boasted that the names and scores of 12,170 teachers were published within. (The Post reporter listed as the point-person on the coverage did not respond Monday afternoon to an emailed request for comment.)
Post rival the Daily News published a list of names on Saturday, too, albeit a more nuanced one, singling out the few dozen teachers at the very high and very low ends of the spectrum with a one-point margin of error.
"We felt in terms of naming people, that was the fairer way to do it," deputy editor Arthur Browne told Capital. "We had good reason to believe that that information carried more veracity as far as these particular individuals were concerned."
Browne said that in mapping out the paper's overall coverage of the data, a team of at least six reporters and three editors took broad strokes as well as delicate ones.
"In macro, you had the big pictures you might be able to draw, like how did it break down from school to school," he said. "Then you had to transfer the information down to the smaller picture, down to individuals."
The News (where Browne also runs the op-ed page) has also published a range of opinion pieces on the topic, including a skeptical piece by The Manhattan Institute's Marcus Winters, who argued of the teacher ratings: "What's important now is that New Yorkers read them cautiously. Unfortunately, not everyone is likely to heed this advice."
For the most part, the local press "covered the story from a traditional angle, which is, the data is out and everyone should be careful how to interpret it," said Andrew Rasiej, an open-government advocate and founder of the Personal Democracy Forum (as well as an investor in Capital). "What most of the press has been about is that the data is too narrow to be properly interpretive of any single teacher or trend. That is probably true."
But Rasiej also questioned why there haven't been "calls for other quantitative or qualitative data to be released alongside the teacher performance data which might put the teacher data into proper context. For example, how are schools comparing to each other? Or how do we know that the tests that the teachers were evaluated on are worth even taking?
"Basically, what I am saying is that this story is like looking at the world through a pipe," said Rasiej. "To have real transparency we need all data associated with the school system, except personal or security info, to be made available and in open searchable formats like A.P.I.s that can be built upon by others. Then over some period of time, we might be able to build a better and much bigger picture of what is truly going on."