Google details security measures for Chrome’s agentic features

An increasing number of browsers are experimenting with agentic features that take action on your behalf, such as booking tickets or shopping for various items. However, these capabilities also come with security risks that could lead to loss of data or money.

Google has detailed its approach to handling user security in Chrome using observer models and user action consent. The company previewed agent capabilities in Chrome in September and said these features will be rolled out in the coming months.

The company said it uses several models to keep the agent's actions in check. Google said it has built a User Alignment Critic using Gemini to scrutinize the actions proposed by the planner model for a given task. If the critic model thinks the planned actions do not serve the user's purpose, it asks the planner to reconsider its strategy. Google noted that the critic model sees only the metadata of each proposed action, not the actual web content.
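As a rough illustration of the planner-and-critic loop described above, here is a minimal Python sketch. All names (`ProposedAction`, `critic_approves`, `plan_with_critic`) and the approval heuristic are assumptions for illustration; Google's actual critic is a Gemini model, not a rule.

```python
# Hypothetical sketch of a metadata-only critic loop. Names and logic are
# illustrative assumptions, not Google's implementation.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    kind: str          # e.g. "click", "type", "purchase"
    target_origin: str
    description: str   # metadata summary only; no raw page content

def critic_approves(action: ProposedAction, user_goal: str) -> bool:
    """Stand-in for the Gemini-based critic: sees only action metadata."""
    # Toy heuristic: a purchase is only aligned if the user asked to buy.
    if action.kind == "purchase" and "buy" not in user_goal.lower():
        return False
    return True

def plan_with_critic(planner, user_goal: str, max_retries: int = 3):
    """Ask the planner for actions; rejected plans go back for revision."""
    for _ in range(max_retries):
        plan = planner(user_goal)
        if all(critic_approves(a, user_goal) for a in plan):
            return plan
    raise RuntimeError("critic rejected all proposed plans")
```

The key property being sketched is that `critic_approves` only ever receives `ProposedAction` metadata, never page content, matching the separation Google describes.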

A few screenshots showing what Chrome's agent model can see from a website.
Image credits: Google

Additionally, to prevent agents from accessing unauthorized or untrusted sites, Google uses Agent Origin Sets, which restrict the model to a limited set of readable origins and a writable origin. Readable origins are sites whose content Gemini is allowed to draw on. For example, on a retail site, the listings are relevant to the task, but banner ads are not. Similarly, Google said the agent should only click or type within certain iframes of a page.

“This delimitation enforces that only data from a limited set of sources is available to the agent, and this data can only be passed to the writable origin. This limits the threat vector of data leakage across multiple sources. This also gives the browser the ability to enforce some of that separation, for example by not even sending data to the model that is outside the readable set,” the company said in a blog post.
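To make the quoted idea concrete, here is a small sketch of how a browser might enforce such a separation, assuming it tracks a readable set and a single writable origin (the terms from Google's post). The class and method names are hypothetical.

```python
# Illustrative "agent origin set" filter: content outside the readable set
# is never forwarded to the model, and actions are confined to the writable
# origin. All names here are assumptions, not Chrome's API.
from __future__ import annotations
from urllib.parse import urlparse

class AgentOriginSet:
    def __init__(self, readable: set[str], writable: str):
        self.readable = readable | {writable}
        self.writable = writable

    @staticmethod
    def origin(url: str) -> str:
        p = urlparse(url)
        return f"{p.scheme}://{p.netloc}"

    def filter_content(self, url: str, content: str) -> str | None:
        # Data from outside the readable set never reaches the model.
        return content if self.origin(url) in self.readable else None

    def may_act_on(self, url: str) -> bool:
        # Clicks and typing are confined to the writable origin.
        return self.origin(url) == self.writable
```

In this sketch, a banner ad served from a third-party origin would be dropped by `filter_content` before the model ever sees it, which is the "not even sending data to the model" enforcement the post describes.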


Google also monitors page navigation by examining URLs through a different observer model. This can prevent navigation to malicious model-generated URLs, the company said.
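A navigation check of this kind can be sketched as a gate the browser consults before following any agent-generated URL. The blocklist heuristic below is a toy stand-in for Google's observer model, and every name is hypothetical.

```python
# Sketch of a navigation observer screening model-generated URLs before the
# browser follows them. The checks are toy heuristics, not Google's logic.
from urllib.parse import urlparse

ALLOWED_SCHEMES = {"https"}
SUSPICIOUS_TLDS = {".zip", ".top"}   # illustrative only

def navigation_allowed(url: str) -> bool:
    """Return False for URLs the observer flags as risky."""
    p = urlparse(url)
    if p.scheme not in ALLOWED_SCHEMES:
        return False
    host = p.netloc.lower()
    return not any(host.endswith(tld) for tld in SUSPICIOUS_TLDS)

def navigate(agent_url: str) -> str:
    # The browser consults the observer on every agent-driven navigation.
    if not navigation_allowed(agent_url):
        raise PermissionError(f"navigation blocked: {agent_url}")
    return f"navigating to {agent_url}"
```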

A screenshot showing Chrome's agent model asking the user for permission before paying for an item while shopping.
Image credits: Google

The search giant said it will also hand control back to users for sensitive tasks. For example, when an agent tries to navigate to a site containing sensitive data, such as banking or medical information, it asks the user first. For sites that require a login, the user will be asked for permission to let Chrome use its password manager; Google said the agent model has no exposure to password data. The company added that it will ask users before taking actions such as making a purchase or sending a message.
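The consent handoff described above amounts to a gate on sensitive action categories. Below is a minimal sketch; the category list and callback shape are assumptions for illustration, not Chrome's behavior.

```python
# Hedged sketch of a user-consent gate: sensitive actions pause until the
# user explicitly approves. Categories and names are assumptions.
SENSITIVE_KINDS = {"purchase", "send_message", "open_banking", "open_medical"}

def run_action(kind: str, detail: str, ask_user) -> bool:
    """Execute an agent action, pausing for explicit consent when sensitive.

    ask_user is a callback that presents a prompt and returns True/False.
    """
    if kind in SENSITIVE_KINDS and not ask_user(f"Allow agent to {kind}: {detail}?"):
        return False  # control stays with the user; nothing happens
    # ... perform the action here ...
    return True
```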

Google said it also has a prompt-injection classifier to prevent unwanted actions, and it is testing the agentic capabilities against attacks created by researchers.

AI browser makers are also paying attention to security. Perplexity earlier this month published a new open-source content-detection model to prevent prompt injection attacks against agents.
