Meta to Utilize AI in its Product Risk Assessments



An AI-driven system may soon take over the evaluation of potential harms and privacy concerns for up to 90% of updates made to Meta platforms such as Instagram and WhatsApp, based on internal documents reportedly obtained by NPR.

According to NPR, a 2012 settlement between the Federal Trade Commission and Facebook (now Meta) mandates that the company perform privacy assessments for its products, analyzing the risks associated with any changes. These evaluations have, until now, been primarily carried out by human reviewers.

Under the new system, Meta reportedly said, product teams will complete a questionnaire describing their work. In most cases, they will then receive an “instant decision” that flags AI-identified risks and outlines the criteria an update or new feature must meet before release.

This AI-led approach is expected to help Meta accelerate its product update process. However, one former executive told NPR that it also introduces “higher risks,” as “negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”

A Meta spokesperson, in a statement, said the company has “invested over $8 billion in our privacy program” and remains focused on “deliver[ing] innovative products for people while meeting regulatory obligations.”

“As risks evolve and our program matures, we enhance our processes to better identify risks, streamline decision-making, and improve people’s experience,” the spokesperson said. “We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues.”

According to internal company documents obtained by NPR, up to 90% of all risk assessments will soon be automated.

In practice, this means that key changes—such as major algorithm updates, new safety tools, and adjustments to content-sharing policies across Meta’s platforms—will largely be approved by an AI-powered system, rather than undergoing review by staff responsible for assessing potential unintended consequences or misuse.
