On November 28, 2011, the Las Vegas Review-Journal published an article in which it was reported:
“The nation’s leading law enforcement agency (FBI) collects vast amounts of information on crime nationwide, but missing from this clearinghouse are statistics on where, how often, and under what circumstances police use deadly force. In fact, no one anywhere comprehensively tracks the most significant act police can do in the line of duty: take a life.
“We don’t have a mandate to do that,” said William Carr, an FBI spokesman in Washington, D.C. “It would take a request from Congress for us to collect that data.”
Local law enforcement agencies have guidelines and internal review processes in one form or another, but what they don’t have are compilations of statistics which would facilitate analysis of trends and problems.
As of July 31, 2012, Mother Jones magazine cited the Review-Journal study of shootings in Las Vegas, Nevada, but was still lamenting the lack of national statistics and analysis. Six days ago, on August 14, 2014, USA Today reported the results of an FBI study which concluded that, flawed though it may be, the FBI database is the best thing we have, and that it shows officers have been involved in approximately 400 lethal events. Again, the problem of statistical gaps drew comment:
“University of South Carolina criminologist Geoff Alpert, who has long studied police use of deadly force, said the FBI’s limited database underscores a gaping hole in the nation’s understanding of how often local police take a life on America’s streets — and under what circumstances.
“There is no national database for this type of information, and that is so crazy,” said Alpert. “We’ve been trying for years, but nobody wanted to fund it and the (police) departments didn’t want it. They were concerned with their image and liability. They don’t want to bother with it.” [USAT]
This goes some distance toward explaining why we’ve not been able to address the issues of officer involved lethal events with any precision. Police departments are reluctant to report incidents with any specificity and quantity, Congress won’t ask the FBI to compile the information, and the blind lead the blind into interminable debates about whether there is a problem, and what the nature of the problem might be.
There are 17,000 law enforcement jurisdictions in the United States, but only 750 (fewer than 5 percent) contribute such statistics to the national database. [USAT] Here’s why this is a problem:
# Failure to quantify a problem, or to attempt a quantification using a mix of statistical and anecdotal evidence, colors any scientific analysis of projections, correlations, and trends. We cannot rationally analyze and evaluate that which we cannot statistically describe.
For example, if we have two highly publicized cases of lethal events, does this mean we have a problem? Is the problem ethnic? Cultural? Is it even a problem? Lacking valid and reliable statistical context none of these questions can be adequately addressed.
# Without a statistical context, the anecdotal and the immediate obscure the predictive and the analytical. The argument becomes one of perception, and perception uninformed by any clarification or larger context. When the argument spills into the street the view becomes even more opaque. While the existing statistics do support the assertion that interactions between white officers and black suspects are more likely to be negative, the limited depth of the statistics precludes giving the numbers any meaningful range. We have a general sense of negativity, from a limited number of jurisdictions, which leads to more problems.
# Since not all law enforcement agencies are compelled to supply statistics on this subject, there is little predictive value from the numbers we do have. We can study the trends in large agencies, such as Las Vegas Metro, Los Angeles, or New York City, but little can be reliably said of agencies which do not report. Unfortunately, this situation means that smaller, or less responsive, police departments can’t adequately address problems — real or potential — in their environs and jurisdictions.
For example: Let’s create a hypothetical in which there is a major metropolitan police department which does track and report its officer involved lethal event figures. If this is a well administered, community responsive, department then we can reasonably conclude that the Megatropolis area has good police/community relations. Further, if a few suburban departments collect and report their statistics, and those, too, are positive, then most community leaders might conclude relations in the overall region are generally good, and in no need of assessment or change.
But, let’s toss a fly in our hypothetical ointment — What if there is a cluster of small jurisdictions in the metropolitan area which do not report, and do not have a demonstrable record of positive interactions with their community members? In this instance, the partial analysis of statistics from a limited sample obfuscates problems which will eventually flare into anecdotal evidence. Or, to put it more simply — into people in the streets and headlines in the newspapers.
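The arithmetic behind this fly in the ointment can be sketched in a few lines of code. Every jurisdiction, population figure, and incident count below is invented purely for illustration; the rate is expressed per 10,000 calls for service, the same yardstick the IACP used:

```python
# Hypothetical jurisdictions: (name, calls for service, use-of-force incidents, reports to database?)
jurisdictions = [
    ("Megatropolis PD", 1_000_000, 300, True),   # large, reporting, relatively low rate
    ("Suburb A PD",       100_000,  25, True),
    ("Suburb B PD",       100_000,  30, True),
    ("Smallville PD",      20_000,  40, False),  # small non-reporting cluster
    ("Riverside PD",       15_000,  35, False),  # with a disproportionate problem
]

def rate_per_10k(rows):
    """Aggregate use-of-force incidents per 10,000 calls for service."""
    calls = sum(c for _, c, _, _ in rows)
    incidents = sum(i for _, _, i, _ in rows)
    return 10_000 * incidents / calls

reporting = [j for j in jurisdictions if j[3]]
observed = rate_per_10k(reporting)      # what the national database would show
actual = rate_per_10k(jurisdictions)    # the true regional picture

print(f"observed rate: {observed:.1f} per 10,000 calls")  # about 3.0
print(f"actual rate:   {actual:.1f} per 10,000 calls")    # about 3.5
```

In this invented region the database sees a reassuring figure while the true regional rate is noticeably higher, and the two non-reporting departments, taken by themselves, run at over 20 incidents per 10,000 calls. The partial sample does not merely miss the problem cluster; it actively understates the regional trend.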
It didn’t have to be this way. The International Association of Chiefs of Police began collecting data on officer involved incidents in 1995, and reported in 2001 that there were 3.6 records of use of force for every 10,000 calls for service. [IACP] For the first two years the project was supported by a joint grant from the Bureau of Justice Statistics and the National Institute of Justice, and from 1998 to 2001 the database was funded by the IACP. [IACP pdf] The IACP developed prototype software for reporting and worked to secure state and local cooperation, but in 2001 the funding dried up and the project halted. [USAT]
Therefore, for the last 13 years we’ve been effectively operating with vision obscured by the lack of hard data. Some law enforcement agencies may have made great strides in terms of community relations — but we’d not see that reflected in national statistics because we don’t have the numbers. There may be some police departments which have trajectory trends in police officer incidents that are essentially negative — but we don’t know this because we don’t have the numbers. There may be some regional problems indicating negative trends in community relations, but we don’t know this because we don’t have the numbers.
There are also reasons for police departments to support the collection of more, and better, data. First, it’s really difficult to fix problems which aren’t acknowledged, and when anecdotal evidence — from either side — is all that’s available it is all too easy to miss trends. Second, if Department A is tracking its use of lethal force, while in the next door ZIP code Department B is functioning blithely unaware that it has a growing problem, then it’s reasonable for Department A to want to be aware of neighboring problems which threaten to land on its own doorstep. And, third, it is all but impossible to objectively evaluate the seriousness of issues such as the use of lethal force, and the efforts made to correct injustices, without a solid, reliable, national database.
It’s high time for Congress to require that the Bureau of Justice Statistics compile and report statistics on officer involved use of force incidents, and to resurrect the IACP project with adequate funding. Otherwise, we’ll continue to blunder in the dark, living witness to the truth of the old adage: There are none so blind as those who will not see.