Policy decisions about highway safety — which programs to implement, and which ones already in operation are worth continuing — should be guided by scientific evaluations of program effectiveness. Few would argue with this. Still, there is the question of how competent the evaluations are. As in many applied fields, this one isn't lacking for junk science, which sometimes makes its way into the public domain.
A recent example is an investigation of the effects of red light cameras in Greensboro, N.C. Mark Burkey and Kofi Obeng of the North Carolina Agricultural and Technical State University concluded that installing cameras led to a 42 percent increase in crashes where cameras were located.
"This conclusion flies in the face of every competent study that has been conducted on red light cameras. Study after study has found reductions in both signal violations and crashes," says Richard Retting, the Institute's senior transportation engineer. "It isn't that Burkey and Obeng found a better way to evaluate a measure that already has been assessed by researchers. The reason their findings are so different from previous studies is that the methods of their investigation are fundamentally flawed."
Two main flaws
Burkey and Obeng's purpose was to estimate crash effects at intersections with cameras. One flaw is that they used signalized intersections without cameras in the same community as controls.
"This ignores the well-known spillover effect," Retting points out. That is, the effects of cameras spill over to intersections without cameras. Publicity and media coverage make drivers aware of the general presence of cameras in a community. The result is a generalized change in driver behavior at intersections with and without cameras. This is why assigning signalized intersections in the same community as controls compromises the findings.
A worse problem is that Burkey and Obeng treated data from intersections with and without cameras as if the cameras had been randomly assigned to their locations. In fact, Greensboro officials installed cameras at intersections with higher crash rates — more than twice as many crashes as at other intersections in the city before the cameras were installed.
Burkey and Obeng ignored this difference and concluded that, because crashes at intersections with cameras outnumbered those at the comparison sites, the cameras must be the culprits. But this simply reflects the far higher number of crashes at the camera sites to begin with. A somewhat better approach would have been to look at how crash rates changed at intersections with cameras versus other intersections.
"That approach still would have ignored spillover, but it would have avoided the silly conclusion that cameras increased crashes," Retting points out. He adds that "it wasn't camera placement that caused the higher number of crashes at intersections with cameras. More likely it was the other way around. The higher number of crashes is what caused the cameras to be placed where they were."
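Retting's levels-versus-changes point can be sketched with a few hypothetical numbers. These figures are illustrative only, not Greensboro data: the "more than twice as many crashes" starting gap echoes the article, but the 20 percent reduction is an assumed effect size, and the flat control trend is an assumption as well.

```python
# Hypothetical averages, chosen to mirror the article's point (not real data):
# camera sites started with more than twice the crashes of other intersections.
camera_before, control_before = 15.0, 7.0   # average crashes per year

camera_after = camera_before * 0.80          # assumed 20% reduction from cameras
control_after = control_before               # assume no change at control sites

# Flawed levels comparison (the Burkey-Obeng approach): camera sites still
# have more crashes after installation, simply because they started far worse.
print(f"after-only comparison: {camera_after:.0f} vs {control_after:.0f} crashes")

# Better: compare the *change* at each group of sites.
camera_change = camera_after - camera_before     # 12 - 15 = -3 crashes
control_change = control_after - control_before  # 7 - 7 = 0 crashes
print(f"change comparison: {camera_change:+.0f} vs {control_change:+.0f}")
```

Under these assumed numbers, the after-only comparison makes the cameras look harmful (12 crashes versus 7) even though they reduced crashes at the sites where they were installed; the change comparison (-3 versus 0) points the right way.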
Conclusions weren't reviewed by peers
Burkey and Obeng's investigation isn't published in the scientific literature. This means it hasn't been subjected to peer review, which involves critique by impartial experts to determine the validity of the methods and findings.
"Studies, especially ones with findings that contradict a body of existing research, should be subjected to peer review," Retting says. "Doing so provides a study with a sort of seal of competence that says, 'These findings are worth paying attention to.' If Burkey and Obeng's report had been subjected to peer review, the reviewers would have pointed out the obvious flaws. Policymakers should ignore the faulty conclusions of this report."