
Chicago Police “Heat List” Renews Old Fears About Government Flagging and Tagging

Modification by Jay Stanley of photo by Nestor Lacle via Flickr
Jay Stanley,
Senior Policy Analyst,
ACLU Speech, Privacy, and Technology Project
February 25, 2014

The Verge had a story last week (expanding on an August report from the Chicago Tribune that I’d missed) that the Chicago police have created a list of the “400 most dangerous people in Chicago.” The Trib reported on one fellow with no criminal arrests who was surprised to receive a visit from the police and be told he was on this list. A 17-year-old girl was also shocked when told she was on the list.

The database, according to the Verge, is based on historical crime information, disturbance calls, and suspicious person reports. The CPD’s list is heavily based on social network analysis (which is interesting considering the debates now swirling around the uses of metadata and the analysis such data enables). A sociologist whose work inspired the list, Andrew Papachristos, told the author of a Chicago Magazine piece (which goes into some interesting depth on some of the theory behind the list): “It’s not just about your friends and who you’re hanging out with, it’s actually the structure of these networks that matter.”

The list was funded through a Justice Department grant known as “Two Degrees of Association.” (At least that’s one less hop than the NSA uses.)
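
Neither the CPD nor the researchers behind it have published the model itself, but the basic “two degrees of association” idea is easy to sketch. Below is a minimal, purely illustrative example in Python, assuming a hypothetical co-arrest graph with invented names; it is not the department’s algorithm, just a way to see how someone can end up flagged through network position alone:

```python
# Illustrative sketch only: a toy "two degrees of association" flag,
# built on a hypothetical co-arrest graph. The names, edges, and the
# flagging rule are invented for illustration; this is not the CPD's model.
from collections import deque

# Hypothetical undirected co-arrest graph: person -> set of associates.
CO_ARREST = {
    "A": {"B"},
    "B": {"A", "C"},
    "C": {"B", "D"},
    "D": {"C"},
}

def within_two_hops(graph, source, max_hops=2):
    """Return everyone reachable from `source` in at most `max_hops` edges."""
    seen = {source: 0}
    queue = deque([source])
    while queue:
        person = queue.popleft()
        if seen[person] == max_hops:
            continue
        for associate in graph.get(person, ()):
            if associate not in seen:
                seen[associate] = seen[person] + 1
                queue.append(associate)
    seen.pop(source)
    return seen  # associate -> hop distance (1 or 2)

# If "A" is a known shooting victim or offender, "C" lands on the list
# purely through network position: two hops away, with no conduct of his own.
print(within_two_hops(CO_ARREST, "A"))   # {'B': 1, 'C': 2}
```

Even in this toy version, a person two hops out gets flagged without any conduct of their own, which is exactly the guilt-by-association worry discussed below.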

I’m still consistently surprised by how often things we worry about in the abstract actually show up in the real world. For years, privacy advocates have been warning about how databases might be mined by the authorities for information used to label, sort, and prejudge people. True, there are all too many precedents for this sort of thing, including the CAPPS II program proposed early in the Bush Administration, the nation’s terrorist watch lists, various police gang lists, and the Automated Targeting System. The TSA’s Pre-Check whitelist is also a cousin of this kind of program. All are based on taking various information sources and grinding them through one logic engine or another to spit out a judgment about individuals and their supposed dangerousness or safeness as human beings. But still, this program amazes me with how starkly it replicates the kinds of things we have been warning about in many different contexts.

Just two weeks ago, for example, I was asked by several news outlets what we think about police officers using Google Glass. I told them that Glass is basically a body camera, and that the issues were the same as those outlined in our white paper on police use of that technology. The principal difference between Glass and the body cameras being marketed to police is that Glass can also display information. I said this shouldn’t be a problem—unless (I added almost apologetically because of the slightly fanciful nature of this point) the police started using them with face recognition to display some kind of rating or warning for individuals who have been somehow determined to be untrustworthy.

“Of course, that’s not a problem today,” I said, “it’s more of a futuristic concern.”

Ha! Barely a week later, that scenario doesn’t seem so futuristic to me anymore, especially at a time when some want to use face recognition to warn them when someone on a blacklist tries to enter a store or school. (True, Google doesn’t currently permit face recognition apps on Glass, but it’s unclear how long that will last.)

Some further points and questions about Chicago’s heat list:

  • The principal problem with flagging suspicious individuals in this way may be the risk of guilt by association. Although we don’t know how valid, accurate, and fair the algorithm is, it’s important to note that even if its measures were valid statistically—that one particular individual really does have an increased risk of crime because of certain things about his or her life—it may still constitute guilt-by-association for a person who actually remains innocent. It is simply not fair for people to be subject to punishments and disadvantages because of the groups they belong to or what other people in similar circumstances tend to do. I keep going back to the example of the man whose credit rating was lowered because the other customers of a store where he shopped had poor repayment histories.
  • Why should the police restrict their heat list to 400? Why not 4,000 or 40,000? In fact, why not give every citizen a rating, say between 1 and 100, of how “risky” they might be? Then the police could program their Google Glass to display that score hovering above the head of every person who comes into their field of vision. This is a path it’s all too easy to see the police sliding down, and one we should not take even the first steps towards.
  • Remember too the point that (as I made here) there are a vast number of laws on the books, many complicated and obscure, and anyone who is scrutinized closely enough by the authorities is far more likely to actually be found to have run afoul of some law than a person who isn’t. In that respect, inclusion on the list could become a self-fulfilling prophecy.
  • Will the Chicago police carry out any kind of analysis to measure how effective this technique is? Will they look at the success of their predictions, search for any discriminatory effects, or attempt to find out whether these rankings become a self-fulfilling prophecy? The police often have little inclination to do any such things—to adopt rigorous criteria for measuring whether their new toys and gizmos are providing a good return on investment. Purely from an oversight point of view, every aspect of this program would ideally be made public so the world could scrutinize it—certainly the algorithm. Privacy concerns, however, suggest that the names of individuals who are (quite possibly totally unfairly) flagged by these algorithms not be made public, nor any personal data that is being fed into the algorithms.
  • A Chicago police commander is quoted as saying, “If you end up on that list, there’s a reason you’re there.” This framing begs the question at the heart of this approach: is it valid and accurate? Such circular logic is genuinely frightening when it comes from a police officer talking about matters of guilt and innocence.
  • It’s true that there could be a fine line between laudable efforts to identify and help “at-risk youth,” and efforts to tag some people with labels that are used to discriminate and stigmatize. Research on the “epidemiology of violence” could be valuable if used as part of a public health approach to crime. But if it’s part of a criminal justice “pre-crime” approach, then that’s where the problems arise.

Overall, the key question is this: will being flagged by these systems lead to good things in a person’s life, like increased support, opportunities, and chances to escape crime—or bad things, such as surveillance and prejudicial encounters with the police? Unfortunately, there are all too many reasons to worry that this program will veer towards the worst nightmares of those who have been closely watching the growth of the data-based society.
