FTC issues warning on misuse of biometric info amid rise of generative AI
AI, machine learning helping to facilitate misuse of consumers' biometric data: FTC
The Federal Trade Commission (FTC) has issued a warning on the potential for consumers’ biometric information to be misused in connection with emerging technologies like generative artificial intelligence (AI) and machine learning.
A policy statement published by the FTC last week warned that the increasingly pervasive use of consumers’ biometric data, including by technologies powered by machine learning and AI, poses risks to consumers’ privacy and data security. Biometric information is data that depicts or describes physical, biological or behavioral traits and characteristics, such as measurements of an identified person’s body or a person’s voiceprint.
"In recent years, biometric surveillance has grown more sophisticated and pervasive, posing new threats to privacy and civil rights," said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. "Today’s policy statement makes clear that companies must comply with the law regardless of the technology they are using."
The FTC announcement listed several ways in which consumers’ biometric data could be misused to violate their privacy and civil rights:
"For example, using biometric information technologies to identify consumers in certain locations could reveal sensitive personal information about them such as whether they accessed particular types of healthcare, attended religious services, or attended political or union meetings. Large databases of biometric information could also be attractive targets for malicious actors who could misuse such information. Additionally, some technologies using biometric information, such as facial recognition technology, may have higher rates of error for certain populations than for others."
The FTC policy statement also featured examples of ways in which businesses’ use of biometric information could violate the FTC Act, including:
- Failing to assess foreseeable harms to consumers before collecting biometric data;
- Not taking action to reduce or eliminate those risks;
- Surreptitious or unexpected collection or use of consumers’ biometric data;
- Failing to evaluate practices and capabilities of third parties who will access biometric data;
- Failing to appropriately train employees and contractors who interact with biometric data; and
- Failing to monitor technologies linked to biometric data to ensure they’re functioning properly and unlikely to harm consumers.
The FTC’s policy statement noted that, according to National Institute of Standards and Technology research, facial recognition technologies became 20 times better at finding a matching photograph in a database between 2014 and 2018, an improvement driven in part by advancements in machine learning.
It explained, "Such improvements are due in significant part to advancements in machine learning, along with data collection, storage, and processing capabilities sufficient to support the use of these technologies."
In particular, so-called "deepfakes" use biometric data to produce counterfeit images and audio that look realistic to an unwitting viewer or listener. The FTC warned that deepfakes could "allow bad actors to convincingly impersonate individuals in order to commit fraud or to defame or harass the individuals depicted."
A Senate committee recently held a hearing that showcased from the outset just how advanced generative AI technologies have become in mimicking an individual’s biometric features, including their voice.
Sen. Richard Blumenthal, D-Conn., who chairs the Senate Judiciary Subcommittee on Privacy and Technology, opened the hearing by playing audio of an opening statement that sounded eerily like the senator himself – though the language was drafted by OpenAI’s ChatGPT and the audio generated by voice cloning software.
"That voice was not mine. The words were not mine," Blumenthal said. The senator went on to explain, "The audio and my playing it may strike you as humorous, but what reverberated in my mind was what if I had asked it and what if it had provided an endorsement of Ukraine surrendering or Vladimir Putin’s leadership? That would have been really frightening."