Federal Trade Commission Staff Report Recommends Best Practices for Companies That Use Facial Recognition Technologies
To prevent business practices that are anticompetitive or deceptive or unfair to consumers; to enhance informed consumer choice and public understanding of the competitive process; and to accomplish this without unduly burdening legitimate business activity.
In December of last year, the Federal Trade Commission (FTC) hosted a workshop – “Face Facts: A Forum on Facial Recognition Technology” – to examine the use of facial recognition technology and the privacy and security concerns it raises.
On Monday, the FTC released two documents summing up the effort. The first is the staff report, a 21-page attempt to synthesize the views of the forum's participants and FTC staff into an authoritative guide. The second is a dissent from the 4-1 vote in favor of releasing the staff report.
In my opinion, Best Practices for Common Uses of Facial Recognition Technologies falls a little short for a couple of reasons. First, of the staff report's three cases, only one — the Facebook case — is actually a facial recognition application. Second, in the instances where the report deals with facial recognition proper, it does so in a wholly hypothetical way. This approach runs the risk of being seen by many as falling outside the ambit of the FTC's mission.
I have selected passages from both documents for examination because they lie at the heart of the whole exercise: they are a distillation of what the project set out to do and what it concluded. The full documents are available via the links below for those who want more.
from the staff report... (pdf at FTC.gov)
To begin, staff recommends that companies using facial recognition technologies design their services with privacy in mind, that is, by implementing “privacy by design,” in a number of ways. First, companies should maintain reasonable data security protections for consumers’ images and the biometric information collected from those images to enable facial recognition (for example, unique measurements such as size of features or distance between the eyes or the ears). As the increasing public availability of identified images online has been a major factor in the increasing commercial viability of facial recognition technologies, companies that store such images should consider putting protections in place that would prevent unauthorized scraping which can lead to unintended secondary uses. Second, companies should establish and maintain appropriate retention and disposal practices for the consumer images and biometric data that they collect. For example, if a consumer creates an account on a website that allows her to virtually “try on” eyeglasses, uploads photos to that website, and then later deletes her account on the website, the photos are no longer necessary and should be discarded. Third, companies should consider the sensitivity of information when developing their facial recognition products and services. For instance, companies developing digital signs equipped with cameras using facial recognition technologies should consider carefully where to place such signs and avoid placing them in sensitive areas, such as bathrooms, locker rooms, health care facilities, or places where children congregate.
Staff also recommends several ways for companies using facial recognition technologies to provide consumers with simplified choices and increase the transparency of their practices. For example, companies using digital signs capable of demographic detection – which often look no different than digital signs that do not contain cameras – should provide clear notice to consumers that the technologies are in use, before consumers come into contact with the signs. Similarly, social networks using a facial recognition feature should provide users with a clear notice – outside of a privacy policy – about how the feature works, what data it collects, and how it will use the data. Social networks should also provide consumers with (1) an easy to find, meaningful choice not to have their biometric data collected and used for facial recognition; and (2) the ability to turn off the feature at any time and delete any biometric data previously collected from their tagged photos. Finally, there are at least two scenarios in which companies should obtain consumers’ affirmative express consent before collecting or using biometric data from facial images. First, they should obtain a consumer’s affirmative express consent before using a consumer’s image or any biometric data derived from that image in a materially different manner than they represented when they collected the data. Second, companies should not use facial recognition to identify anonymous images of a consumer to someone who could not otherwise identify him or her, without obtaining the consumer’s affirmative express consent. Consider the example of a mobile app that allows users to identify strangers in public places, such as on the street or in a bar. If such an app were to exist, a stranger could surreptitiously use the camera on his mobile phone to take a photo of an individual who is walking to work or meeting a friend for a drink and learn that individual’s identity – and possibly more information, such as her address – without the individual even being aware that her photo was taken. Given the significant privacy and safety risks that such an app would raise, only consumers who have affirmatively chosen to participate in such a system should be identified. The recommended best practices contained in this report are intended to provide guidance to commercial entities that are using or plan to use facial recognition technologies in their products and services. However, to the extent the recommended best practices go beyond existing legal requirements, they are not intended to serve as a template for law enforcement actions or regulations under laws currently enforced by the FTC. If companies consider the issues of privacy by design, meaningful choice, and transparency at this early stage, it will help ensure that this industry develops in a way that encourages companies to offer innovative new benefits to consumers and respect their privacy interests. [ed.: bold emphasis mine]
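Before turning to the critique, it may help to make concrete what the report means by biometric information derived from an image – the "unique measurements such as size of features or distance between the eyes" it cites. Here is a minimal sketch, using the open-source face_recognition library as one illustrative choice; the report names no particular technology, and the filename below is hypothetical.

```python
# Minimal sketch of the kind of biometric data the staff report says should be
# protected and eventually disposed of. The face_recognition library is one
# illustrative choice; the photo filename is a placeholder.
import math

import face_recognition  # pip install face_recognition


def center(points):
    """Approximate the center of a set of (x, y) landmark points."""
    return (sum(x for x, _ in points) / len(points),
            sum(y for _, y in points) / len(points))


image = face_recognition.load_image_file("consumer_photo.jpg")  # hypothetical file

# A 128-dimensional face encoding: derived biometric data in the report's sense.
encodings = face_recognition.face_encodings(image)
print(f"Encodings derived: {len(encodings)}")

# The report's own example of a biometric measurement: distance between the eyes.
for landmarks in face_recognition.face_landmarks(image):
    lx, ly = center(landmarks["left_eye"])
    rx, ry = center(landmarks["right_eye"])
    print(f"Interocular distance: {math.hypot(rx - lx, ry - ly):.1f} px")
```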
The first quoted paragraph is common sense. For example: "Companies should establish and maintain appropriate retention and disposal practices for the consumer images and biometric data that they collect." Who could argue with that?
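In code, the report's eyeglass "try on" example reduces to a simple rule: when the account goes, the photos and anything derived from them go too. A hedged sketch follows, with an invented in-memory store standing in for whatever storage a real service would use.

```python
# Hypothetical sketch of the report's retention/disposal recommendation: when a
# consumer deletes her "virtual try-on" account, purge both her uploaded photos
# and any biometric encodings derived from them. All names here are invented.
from dataclasses import dataclass, field


@dataclass
class AccountStore:
    # user_id -> raw uploaded images
    photos: dict = field(default_factory=dict)
    # user_id -> face encodings derived from those images
    encodings: dict = field(default_factory=dict)

    def delete_account(self, user_id: str) -> None:
        # Per the report: once the account is deleted, the photos "are no
        # longer necessary and should be discarded" -- and so should the
        # biometric data computed from them.
        self.photos.pop(user_id, None)
        self.encodings.pop(user_id, None)


store = AccountStore()
store.photos["alice"] = [b"...jpeg bytes..."]
store.encodings["alice"] = [[0.1] * 128]  # stand-in for a real 128-d encoding
store.delete_account("alice")
assert "alice" not in store.photos and "alice" not in store.encodings
```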
I believe many on all sides of the facial recognition issue will find the Face Facts forum findings disappointing, and I think the second quoted paragraph best encapsulates why. In it, the FTC staff report loses coherence.
Let's examine it in detail.
1. The staff report doesn't confine itself to facial recognition proper.
Staff also recommends several ways for companies using facial recognition technologies to provide consumers with simplified choices and increase the transparency of their practices. For example, companies using digital signs capable of demographic detection – which often look no different than digital signs that do not contain cameras – should provide clear notice to consumers that the technologies are in use, before consumers come into contact with the signs.
Demographic inference isn't facial recognition, and nowhere does the FTC staff make a case that a computer guessing at gender, age, or ethnicity has any privacy implication at all. Even if that case were made, the task of tying the activity back to the FTC's mandate would remain.
¿QuĂ©? |
2. Next there's a nameless "social network" — no points for guessing which [See: Consumer Reports: Facebook & Your Privacy and It's not the tech, it's the people: Senate Face Rec Hearings Edition] — that is hypothetically doing the exact same things a non-hypothetical social network actually did, without much in the way of an FTC response.
Similarly, social networks using a facial recognition feature should provide users with a clear notice – outside of a privacy policy – about how the feature works, what data it collects, and how it will use the data. Social networks should also provide consumers with (1) an easy to find, meaningful choice not to have their biometric data collected and used for facial recognition; and (2) the ability to turn off the feature at any time and delete any biometric data previously collected from their tagged photos.
This is the closest the document ever gets to a concrete example of facial recognition technology even being in the neighborhood of an act the FTC exists to regulate, and still the FTC staff doesn't abandon the hypothetical for the real world.
3. Then there's the warning that the FTC would take a dim view of two types of hypothetical facial recognition deployment, each of which would require its own dedicated staff report to make a decent show of doing the topic justice.
Finally, there are at least two scenarios in which companies should obtain consumers’ affirmative express consent before collecting or using biometric data from facial images. First, they should obtain a consumer’s affirmative express consent before using a consumer’s image or any biometric data derived from that image in a materially different manner than they represented when they collected the data.
This is far too general to be useful. As written, it would seem to preclude casinos from using facial databases of known or suspected cheaters, a prohibition few would argue for.
Then there's the question of what makes biometric data so special. Should the same standards apply to all personal data, or just to pictures of faces?
For the situation above to fall within the FTC's mandate, a practice would have to be deemed "deceptive" or "unfair." And if a practice is deceptive or unfair when a face is part of the data being shared, how does using the data in a substantially similar manner cease to be deceptive or unfair merely because the face is omitted? The report is silent on these points.
Second, companies should not use facial recognition to identify anonymous images of a consumer to someone who could not otherwise identify him or her, without obtaining the consumer’s affirmative express consent. Consider the example of a mobile app that allows users to identify strangers in public places, such as on the street or in a bar. If such an app were to exist, a stranger could surreptitiously use the camera on his mobile phone to take a photo of an individual who is walking to work or meeting a friend for a drink and learn that individual’s identity – and possibly more information, such as her address – without the individual even being aware that her photo was taken. Given the significant privacy and safety risks that such an app would raise, only consumers who have affirmatively chosen to participate in such a system should be identified.
This hypothetical future app does exactly what anyone can, today and legally, pay a private detective to do. If the FTC isn't taking action against PIs, it would be extremely helpful for the FTC to make clear to buyers and sellers of facial recognition technology what distinctions it sees between the two.
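For what it's worth, the consent gate the report recommends (identify only people who have affirmatively opted in) is at least easy to express in code. A minimal sketch follows, again using the face_recognition library as an illustrative stand-in; every name and data structure below is hypothetical.

```python
# Hedged sketch of the report's opt-in rule: an unknown face is matched only
# against a gallery of users who gave affirmative express consent. Everyone
# else stays anonymous. Library choice and all names are illustrative only.
import face_recognition  # pip install face_recognition


def identify_if_consented(photo_path, opted_in_gallery):
    """Return a matching opted-in user's name, or None.

    opted_in_gallery maps name -> face encoding and, by construction,
    contains only people who affirmatively chose to participate.
    """
    image = face_recognition.load_image_file(photo_path)
    unknown = face_recognition.face_encodings(image)
    if not unknown:
        return None  # no face found in the photo
    names = list(opted_in_gallery)
    known = [opted_in_gallery[name] for name in names]
    matches = face_recognition.compare_faces(known, unknown[0])
    for name, matched in zip(names, matches):
        if matched:
            return name
    return None  # not in the opt-in gallery: the stranger stays anonymous
```

Note that in this framing consent is a property of the gallery, not of the matcher: the rule is enforced by what the system is permitted to store.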
Then, toward the end of the excerpted text, perhaps sensing how far ahead of itself and the mission of the FTC it has gotten, the staff report essentially says (in the bolded sentence), "Never mind. We aren't formulating new policy here. We're just freestylin'."
However, to the extent the recommended best practices go beyond existing legal requirements, they are not intended to serve as a template for law enforcement actions or regulations under laws currently enforced by the FTC. If companies consider the issues of privacy by design, meaningful choice, and transparency at this early stage, it will help ensure that this industry develops in a way that encourages companies to offer innovative new benefits to consumers and respect their privacy interests. [ed.: bold emphasis mine]
With the possible exception of the "social network" example, pretty much everything in the document goes beyond existing legal requirements enforced by the FTC. So what's going on here?
My hunch is that someone at the FTC became concerned over a "social network" terms-of-service issue and, rather than deal with it as a narrow terms-of-use matter (one seemingly right in the wheelhouse of the "deceptive or unfair" part of the FTC's mission), decided instead that it was a technology issue, and that it was both possible and desirable to address the far bigger questions of facial recognition technology, identity, and society in a coherent way, forgetting that doing so requires a novel interpretation of the FTC's mandate. Once that decision was made, the best-practices document, flawed though it is, was about the best that could be hoped for... which brings us to the dissent.
The decision to release the Face Facts staff report wasn't unanimous. Commissioner J. Thomas Rosch thought releasing the report at all was a mistake. Several paragraphs of the dissent follow below.
The last paragraph quoted below is particularly convincing.
then the lone dissent... (pdf at FTC.gov)
The Staff Report on Facial Recognition Technology does not – at least to my satisfaction – provide a description of such “substantial injury.” Although the Commission’s Policy Statement on Unfairness states that “safety risks” may support a finding of unfairness,3 there is nothing in the Staff Report that indicates that facial recognition technology is so advanced as to cause safety risks that amount to tangible injury. To the extent that Staff identifies misuses of facial recognition technology, the consumer protection “deception” prong of Section 5 – which embraces both misrepresentations and deceptive omissions – will be a more than adequate basis upon which to bring law enforcement actions.
Second, along similar lines, I disagree with the adoption of “best practices” on the ground that facial recognition may be misused. There is nothing to establish that this misconduct has occurred or even that it is likely to occur in the near future. It is at least premature for anyone, much less the Commission, to suggest to businesses that they should adopt as “best practices” safeguards that may be costly and inefficient against misconduct that may never occur.
Third, I disagree with the notion that companies should be required to “provide consumers with choices” whenever facial recognition is used and is “not consistent with the context of a transaction or a consumer’s relationship with a business.”4 As I noted when the Commission used the same ill-defined language in its March 2012 Privacy Report, that would import an “opt-in” requirement in a broad swath of contexts.5 In addition, as I have also pointed out before, it is difficult, if not impossible, to reliably determine “consumers’ expectations” in any particular circumstance.
In summary, I do not believe that such far-reaching conclusions and recommendations can be justified at this time. There is no support at all in the Staff Report for them, much less the kind of rigorous cost-benefit analysis that should be conducted before the Commission embraces such recommendations. Nor can they be justified on the ground that technological change will occur so rapidly with respect to facial recognition technology that the Commission cannot adequately keep up with it when, and if, a consumer’s data security is compromised or facial recognition technology is used to build a consumer profile. On the contrary, the Commission has shown that it can and will act promptly to protect consumers when that occurs.

To summarize, Rosch points out that the FTC staff report:
- Exceeds the FTC's regulatory mandate
- Makes no allegation of consumer harm
- Is so overly broad as to be unworkable
- Provides no support for the conclusions it draws
NOTE: This post has been modified slightly from the original version to add clarity by cleaning up grammar, spelling, and typographical errors.