Recounting her experiences working with Barack Obama as a candidate and as president, Alyssa Mastromonaco says he would often challenge his staff with the question, “Uh, who thought this was a good idea?” It was an attempt to ensure his advisers took personal responsibility for the recommendations they made, especially when things went wrong.
It’s about time someone asked that question about facial recognition software. It would oblige the developers and users of the technology to explain exactly why they think it’s a good idea to create something with that level of power.
Asking that question of facial recognition software is one way of participating in what legal scholar Frank Pasquale calls the “second wave of algorithmic accountability.” In the first wave, computer scientists working on applied artificial intelligence (AI) algorithms asked how the tools could be made more accurate and less biased. In the current second wave, advocates and critics are asking developers and users why they are using the technology at all and whether the payoffs are really worth it.
Major manufacturers of facial recognition software, including Microsoft, Amazon, and IBM, have responded to this second-wave thinking by pausing or abandoning their distribution of facial recognition technology to law enforcement. The same scrutiny that applies to government uses of the technology should apply to the private sector as well. Asking how facial recognition software will be used commercially strikes me as the right way forward.
Despite its rapid evolution, several obstacles stand in the way of this values-based approach to technology management. One obstacle is consent: for commercial uses of facial recognition technology, data subject consent seems to be the main mechanism adopted to manage the new technology. Another is the private right of action. Combining a consent requirement with a private right of action invites litigation over whether consent was properly obtained. Previous legal battles suggest that such a private right of action leads to expensive legal disputes, and they are worth examining as an almost paradigm case of technology mismanagement.
THE ILLINOIS BIOMETRIC INFORMATION PRIVACY ACT
The paradigm example is the Biometric Information Privacy Act (BIPA), a 2008 Illinois state law. It is a sweeping law covering all private parties (but not state or local governments) that collect or use biometric information, including facial images, fingerprints, and retinal scans. BIPA requires notice and affirmative written consent for the collection and use of this information. It provides for a private right of action and damages of up to $5,000 per violation. With millions of users involved in some cases, the law exposes technology companies to damages in the billions of dollars if plaintiffs can establish that they were injured.
Recently, three cases against Amazon, Google, and Microsoft were filed in Washington and California, making them subject to the Ninth Circuit’s generous interpretation of standing. The cases allege that the companies violated the requirement to obtain affirmative consent when they used images from IBM to train their facial recognition software. IBM had initially gathered the photos from Flickr and painstakingly labeled them to enable facial recognition developers to improve the fairness and accuracy of their programs. IBM says it provides opportunities to “opt out” for those who don’t want their photos used for this purpose. That, however, is not what the statute demands: it requires affirmative “opt-in” consent. There is also no indication that Microsoft, Amazon, or Google took steps to notify all data subjects and obtain their consent before collecting images from IBM and using them to train their facial recognition software.
THE FACEBOOK PRECEDENT
Legal precedent involving Facebook suggests that Microsoft, Amazon, and Google should be worried. In February of this year, Facebook settled a similar BIPA lawsuit for $550 million, one of the largest privacy settlements in history, after trying and failing to obtain Supreme Court review of the Ninth Circuit decision.
Facebook had provided an opt-out option for users of its facial recognition software, but the statute specifies “opt-in.” After the suit began in 2016, Facebook switched to opt-in consent but still faced potentially millions of violations for its use of the software before the change. Its legal exposure easily ran into the billions of dollars.
Facebook argued that even though there might have been a technical violation of the statute, this was a case of no harm, no foul. The company had a colorable argument, too. Under the Supreme Court’s 2016 Spokeo decision, a statutory violation is not sufficient for a lawsuit; a plaintiff has to show concrete injury as well. In this case, Facebook argued, there was no concrete injury in the difference between the opt-out choice it offered its users and the opt-in consent required by the statute.
In the August 2019 Patel v. Facebook decision, the Ninth Circuit Court of Appeals said a statutory violation may cause a concrete injury if (1) the statutory provisions were established to protect the plaintiff’s concrete interests, and (2) the violations alleged actually harm, or present a material risk of harm, to such interests. An intangible harm, like the violation of a statutory privacy right, could therefore be grounds for a lawsuit. As the Ninth Circuit said, “Using facial-recognition technology without consent (as alleged in this case) invades an individual’s private affairs and concrete interests.”
Facebook wasn’t done. There was a split in the circuits: the Second Circuit interpreted Spokeo differently, requiring tangible harm to establish a concrete injury. Facebook tried to get the Supreme Court to take its case to resolve the split, but in January of this year, the court declined to hear the appeal without comment. In February, Facebook settled.
Commentators predicted that this would open the floodgates to copycat class-action suits. Sure enough, the new cases against Microsoft, Amazon, and Google appeared this month.
THE WAY FORWARD
To the average person, this back and forth about legal standing is bewildering. It is hard to believe that the fate of a promising new technology should depend on the way judges might parse the meaning of “concrete injury.” But the bottom line is legal trouble for companies and less than fully rational policy for the country. We must change direction and find a new way forward to manage the development and introduction of facial recognition.
The time is ripe for several steps toward a national strategy for facial recognition. One step would be a nationwide, pre-emptive facial recognition law. Commercial uses of facial recognition technology affect interstate commerce and should be regulated at the national level. Conflicting state laws should be pre-empted, creating a single national policy for all. This would include BIPA insofar as its requirements are inconsistent with the new policy set out in the national statute.
The second step would be to make a federal agency responsible for implementation and enforcement, rather than relying on private rights of action with potentially unlimited liability for commercial users of facial recognition technology. Sensible management of a new technology is not possible when it can be disrupted by the oddities of court interpretations, the uncertainties of class certifications, and a myriad of other legal minefields.
One further step is to question whether data subject consent is the right way to protect people from the potential harms associated with applications of facial recognition software. Of course, no one should be forced to buy a new technology if they don’t want it. But should we really have to ask every individual whose information might be used to train a new algorithm whether they want it used for that purpose? Training facial recognition programs to be accurate and free of bias seems to be a public good. Why should it be frustrated by assigning even opt-out rights to data subjects? What have we gained as a nation if Microsoft, Google, and Amazon continue to use biased and inaccurate facial recognition software? Should they face billions of dollars in damages for seeking to perfect their products?
The bill recently introduced by Sens. Bernie Sanders and Jeff Merkley would unfortunately mimic many of the problematic features of the Illinois BIPA law, including affirmative opt-in consent for the collection and use of biometric information, enforcement by private right of action, and damages of up to $5,000 per violation. And it does not pre-empt inconsistent state laws.
Of course, we can argue about whether the use of a facial recognition program makes sense in a particular case. A new law should set up a process for such values-based evaluation. Even if a facial recognition program can identify major demographics in an unbiased, accurate, and efficient way, we have to ask ourselves if that is a goal we want to pursue. Improper or dangerous uses of facial recognition technology will not be rare corner cases. As applications of the technology multiply, it will become clear that some of its central uses are questionable because of their underlying purposes.
This is what Pasquale is arguing for as part of the second wave of algorithmic accountability—to ask not just whether the technology is fair and accurate, but whether it should be used in a particular case at all.
We already recognize the value-laden nature of technology in the international arena, where the U.S. and its allies seek to promote “emerging technology that advances liberal democratic values,” in the words of Tarun Chhabra of Georgetown’s Center for Security and Emerging Technology. The second wave of algorithmic accountability is a domestic version of this values-based vision of technology management. It is time to apply it to the specific case of facial recognition technology.
If there’s no good way to establish in national law a mechanism for evaluating the uses of facial recognition technology, then we are left with the legal tools we have. Many might believe that the commercial uses of facial recognition under current standards are so dangerous to the public that throwing sand in the gears with billion-dollar class-action lawsuits is the best of the bad options available. Given the track record of the last 40 years of regulatory indifference to the consequences of technology applications—and the resulting troubles that surround us in the form of harmful spill-over effects of internet-related technologies—it is hard to argue with the pessimistic assessment that slowing facial recognition technology down in this way might be a fully rational way to proceed if there are no other alternatives.
We have to recognize that this is a miserable way to manage technology, and that we can do better. When things go seriously wrong with applications of facial recognition technology, or when its rollout is unreasonably delayed by irrational lawsuits, as will inevitably happen if there is no sensible values-based plan to manage its introduction, no one will want to answer the Obama question: “Uh, who thought this was a good idea?”