Meta is having trouble with rogue AI agents

An AI agent went on the rampage at Meta, exposing sensitive corporate and user data to employees who were not authorized to access it.
According to an incident report reviewed by The Information, a Meta employee posted on an internal forum asking for help with a technical question – a routine action. However, another engineer asked an AI agent to help analyze the question, and the agent ended up posting an answer without first asking the engineer for permission to share it. Meta confirmed the incident to The Information.
It turns out the AI agent didn't give good advice. The employee who asked the question acted on the agent's directions, inadvertently exposing massive amounts of corporate and user data for two hours to technicians who weren't authorized to access it.
Meta deemed the incident a “Sev 1,” which is the second-highest level of severity in the company’s internal system for measuring security vulnerabilities.
This isn't the first time rogue AI agents have caused problems at Meta. Summer Yue, safety and alignment director at Meta Superintelligence, posted on X last month describing how her OpenClaw agent deleted her entire inbox, even though she said it was supposed to confirm with her before taking action.
Still, Meta seems optimistic about the potential for agentic AI. Last week, Meta bought Moltbook, a Reddit-like social media site that allows OpenClaw agents to communicate with each other.
