Meta’s $1.4B Texas Settlement Highlights Facial Recognition Risks

Meta Platforms agreed to pay $1.4 billion to settle a Texas lawsuit alleging the company illegally used facial-recognition technology to collect biometric data on millions of state residents without their consent.

The state of Texas, represented by law firm Keller Postman, filed the suit in 2022 under the state’s 2009 biometric privacy law, which allows damages of up to $25,000 per violation. The state alleged that Meta “had, for over a decade, built an artificial intelligence empire on the backs of Texans and millions of other Americans — deceiving them while capturing their most intimate data, thereby putting their well-being, safety, and security at risk.”

Zina Bash, a senior partner at Keller Postman and one of the lead attorneys on the case, said the case marked an important step in data privacy enforcement under the Texas law. “This was an unprecedented case in many ways: It was the first time the state of Texas sought to enforce its biometric-privacy law since enactment, requiring our team to develop novel litigation approaches and analyze important questions of first impression,” she said in a statement.

Facial recognition has become a hot-button topic as artificial intelligence gains popularity and widespread use. Last year, the US Federal Trade Commission (FTC) banned Rite Aid’s use of facial recognition technology to identify potential shoplifters. The technology had been deployed without customers’ knowledge in several stores in major US cities.

The Texas case also highlighted concerns about facial recognition use without knowledge or consent. Texas Attorney General Ken Paxton said in a statement that the settlement “demonstrates our commitment to standing up to the world’s biggest technology companies and holding them accountable for breaking the law and violating Texans’ privacy rights. Any entity abusing Texans’ sensitive data will be met with the full force of the law.”

John Dwyer, director of security intelligence firm Binary Defense, says biometrics will be at the forefront of many regulatory discussions. “It’s a newer data type that is pretty difficult to control because it’s all managed and processed by artificial intelligence,” he says in a phone interview with InformationWeek. “It adds a level of complication and it’s a paradigm shift from traditional databases.”

Given the dollar amount and prominence of Meta, the Texas case settlement’s influence will likely extend across state lines. “I wouldn’t be surprised to see more state legislation for biometric data,” Dwyer says. “It’s a cool feature and can make lives simpler — the question is at what cost? And what kind of controls can we put in place?”

Federal legislation on data privacy remains stalled in Congress. While the American Data Privacy and Protection Act (ADPPA) passed a vote in the House Energy and Commerce Committee, it has yet to be taken up by the full House and would then need Senate approval. Americans instead must rely on a hodgepodge of data privacy laws in 10 different states.

“I think the US government is certainly going to get more involved in AI and what it will mean for us as a people and how we develop the right kind of policies around this technology,” Dwyer says. “We will see more questions being asked at the federal level to answer what we are going to do as a country with regulation going forward.”

Deeper Meaning for Artificial Intelligence Ethics

For Manoj Saxena, founder of the Responsible AI Institute (RAI Institute) and InformationWeek Insight Circle member, the settlement will strengthen responsible AI use. “This Meta case in Texas is just the beginning,” he says. “While it may initially deter companies, particularly startups, from exploring facial recognition technologies due to associated risks, the overall impact is undeniably positive … This settlement not only validates (RAI Institute’s) mission, but also emphasizes the urgency for all organizations involved in AI to adopt responsible practices, and to prioritize transparency, consent, and data protection.”

Saxena also believes the case will result in stricter regulations for biometric data use and artificial intelligence, and that such settlements will push companies to adopt better internal safety controls as they take on more AI-related technologies.

Jason Glassberg, co-founder of Casaba Security, which specializes in new and emerging AI threats, says the case underscores the need for consent. “This settlement sends a pretty strong message,” he says in an email interview. “Biometric data is highly sensitive and unique. Unauthorized collection and use of such data can lead to serious misuse. This case underscores the necessity of obtaining explicit consent from users before collecting their biometric data.”

Glassberg says Meta’s willingness to acknowledge and settle the issue will work in its favor as far as trust goes. “This should serve as a significant precedent for technology companies. Companies will need to be more transparent about their collection practices and build privacy protections into their products.”

