
Do Androids Dream of Electric Speech?

Gabe Rottman,
Legislative Counsel,
ACLU Washington Legislative Office
June 21, 2012

Professor Tim Wu at Columbia had an op-ed in the New York Times yesterday arguing against First Amendment protections for “automated” speech. Here’s the argument distilled:

As a matter of legal logic, there is some similarity among Google, Ann Landers, Socrates and other providers of answers. But if you look more closely, the comparison falters. Socrates was a man who died for his views; computer programs are utilitarian instruments meant to serve us. Protecting a computer’s “speech” is only indirectly related to the purposes of the First Amendment, which is intended to protect actual humans against the evil of state censorship. The First Amendment has wandered far from its purposes when it is recruited to protect commercial automatons from regulatory scrutiny.

For the uninitiated, this is all coming up in the context of regulatory investigations into Google’s search engine results, which are examining whether Google tweaks its search mechanism to favor its own interests. Although the precise method by which Google identifies and ranks its search results is a closely kept secret, we do know that it is based on algorithms that reflect human judgment about what individuals would find most useful.

The ACLU has no current position on the First Amendment status of Google’s search algorithm. That said, and with respect, I think Professor Wu may be jumping the gun, at least when it comes to search engines, in drawing a bright line between “automated” speech, which he says is undeserving of First Amendment protection, and human speech, which of course deserves it.

There are many different “types” of automated speech. In fact, Professor Wu gives three examples at the beginning of his column (GPS navigation, spellcheckers and Facebook friend recommendations). One of these, the route mapped out by a GPS navigation system, serves as a good counterpoint to search engine algorithms for exploring the various First Amendment considerations here. (Importantly, I’m not arguing that GPS results are unprotected; they may very well be protected. The point I make below is that the argument for protecting search engines is even more compelling in light of the human creativity and ingenuity required to rank results effectively.)

There is a qualitative difference between Google search and a navigation system. At base, the difference comes down to complexity—but it’s more profound than that. When you search for directions between 123 Main Street and 456 Skid Row, the possible routes that the computer is going to spit back are relatively few. Although the computer may “smartly” suggest an alternate route to avoid traffic, the actual process is devoid of human judgment.

Now, when you’re talking about a search algorithm, the actual process may be automated (Google has enormous server farms that index websites, track search queries and rank possible results without any human intervention). But the underlying logic that generates the best results is a matter of human judgment.
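To make that concrete, here is a minimal sketch, in Python, of a hand-tuned ranking function. The features and weights are entirely hypothetical (Google’s actual signals are secret); the point is to show where the human judgment lives. The scoring is applied mechanically at query time, but the weights encode someone’s opinion about what makes a result good:

```python
# Toy ranking function. The weights are hypothetical stand-ins for the
# editorial judgment baked into a real search engine: someone decided
# that, say, link authority should count more than keyword frequency.
WEIGHTS = {"keyword_frequency": 1.0, "link_authority": 3.0, "freshness": 0.5}

def score(page):
    """Mechanical at query time, but the relative weights encode a
    human opinion about what makes a result 'good'."""
    return sum(WEIGHTS[feature] * value
               for feature, value in page["features"].items())

pages = [
    {"url": "example.com/a",
     "features": {"keyword_frequency": 0.9, "link_authority": 0.2, "freshness": 0.8}},
    {"url": "example.com/b",
     "features": {"keyword_frequency": 0.4, "link_authority": 0.9, "freshness": 0.1}},
]

# Page b ranks first only because a human chose to weight link authority heavily.
for page in sorted(pages, key=score, reverse=True):
    print(page["url"], round(score(page), 2))
```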

In essence, Google search is a lot like the old text adventure games from the late 1970s and early 1980s (the iconic example being, of course, “Adventure”). When the game is running on your computer, there is obviously no human intervention. But the underlying mechanics of the game are entirely the product of human judgment and intuition. To navigate your character through the game world, you enter text commands. The programmer needs to creatively anticipate, based on an intuitive understanding of human interaction, what text command a player is likely to enter. Based on that understanding, the commands “travel north” and “go up” would be programmed to achieve the same result (and this gets even more complicated with non-directional commands).
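Stripped to its essentials, that anticipation is just a lookup table. Here is a minimal sketch (the synonym table is invented for illustration, not taken from any actual game):

```python
# Every entry here is a human guess about what a player might type.
# "travel north" and "go up" deliberately map to the same result.
COMMAND_SYNONYMS = {
    "travel north": "north",
    "go up": "north",
    "walk north": "north",
    "n": "north",
}

def parse(raw_input):
    canonical = COMMAND_SYNONYMS.get(raw_input.strip().lower())
    if canonical is None:
        return "I don't understand that."
    return f"You head {canonical}."

print(parse("Travel North"))  # You head north.
print(parse("go up"))         # You head north.
print(parse("dance"))         # I don't understand that.
```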

This is very different from a GPS system, which relies on entirely natural phenomena to drive its results (it takes no human intervention to calculate the quickest route once you’ve accounted for traffic, construction, weather and other natural elements).
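For contrast, here is a minimal sketch of the mechanical part of route-finding: a standard shortest-path search (Dijkstra’s algorithm) over a hypothetical road network. Once the travel times are fixed, the quickest route falls out of the arithmetic with no editorial judgment at all:

```python
import heapq

def quickest_route(graph, start, end):
    """Dijkstra's shortest-path search: purely mechanical once the
    edge weights (travel times) are known."""
    queue = [(0, start, [start])]  # (minutes so far, node, path)
    seen = set()
    while queue:
        minutes, node, path = heapq.heappop(queue)
        if node == end:
            return minutes, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, cost in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (minutes + cost, neighbor, path + [neighbor]))
    return None

# Hypothetical road network; travel times (in minutes) already reflect
# traffic, construction and weather.
roads = {
    "123 Main St": [("Oak Ave", 5), ("Elm St", 9)],
    "Oak Ave": [("456 Skid Row", 7)],
    "Elm St": [("456 Skid Row", 2)],
}
print(quickest_route(roads, "123 Main St", "456 Skid Row"))
# (11, ['123 Main St', 'Elm St', '456 Skid Row'])
```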

Another, more fanciful, example (though certainly not outside the realm of near-term possibility; see, e.g., IBM’s Jeopardy-playing computer, Watson) is an automated “op-ed” machine. Imagine I could program a computer with various baseline opinions (the computer would know, for instance, that I’m against the death penalty). I could then correlate those opinions with other opinions (an individual who opposes the death penalty probably has a problem with solitary confinement). Based on those human-derived linkages in the computer algorithm, the computer could conceivably generate an opinion column on another subject that is entirely “automated,” but also entirely the product of human judgment. Science fiction aside, would anyone doubt that such an automated op-ed should receive the same First Amendment protection as a human-generated one?
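A minimal sketch of that hypothetical machine (the issues, stances and linkages are all invented for illustration). The correlation table is where the human judgment enters; everything after that is mechanical:

```python
# Baseline opinions supplied by a human.
BASELINE_VIEWS = {"death_penalty": "oppose"}

# Human-derived linkages: holding one view predicts holding another.
LINKAGES = {
    ("death_penalty", "oppose"): [("solitary_confinement", "oppose")],
}

def derive_views(baseline):
    """Mechanically propagate the human-authored correlations."""
    derived = dict(baseline)
    frontier = list(baseline.items())
    while frontier:
        issue, stance = frontier.pop()
        for new_issue, new_stance in LINKAGES.get((issue, stance), []):
            if new_issue not in derived:
                derived[new_issue] = new_stance
                frontier.append((new_issue, new_stance))
    return derived

views = derive_views(BASELINE_VIEWS)
print(f"This column will argue that we should {views['solitary_confinement']} "
      "solitary confinement.")
```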

Of course, things get even more complicated when we start talking about artificial intelligence, but we don’t even have to go there. The fact is, “automated” speech is a reflection of human judgment, creativity and ingenuity. To summarily banish it from the ambit of the First Amendment would be a radical step. As with all First Amendment questions, it’s exceedingly difficult to draw clear lines—and we shouldn’t try. Automated speech is as complicated as human speech, and the law should reflect that complexity.
