Meta plans to automate many of its product risk assessments

An AI-powered system could soon take responsibility for evaluating the potential harms and privacy risks of up to 90 percent of updates made to Meta apps like Instagram and WhatsApp, according to internal documents reportedly viewed by NPR.
NPR says a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission requires the company to conduct privacy reviews of its products, evaluating the risks of potential updates. Until now, those reviews have been largely conducted by human evaluators.
Under the new system, Meta reportedly asks product teams to fill out a questionnaire about their work, after which they usually receive an “instant decision” with AI-identified risks, along with requirements that an update or feature must meet before it launches.
This AI-centric approach would allow Meta to update its products more quickly, but a former executive told NPR that it also creates “higher risks,” as “negative externalities of product changes are less likely to be prevented before they start causing problems in the world.”
In a statement, a Meta spokesperson said the company has “invested over $8 billion in our privacy program” and is committed to delivering “innovative products for people while meeting regulatory obligations.”
“As risks evolve and our program matures, we enhance our processes to better identify risks, streamline decision-making, and improve people’s experience,” the spokesperson said. “We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues.”
This story has been updated with additional quotes from Meta’s statement.