Google calls for weakened copyright and export rules in AI policy proposal

Following on the heels of OpenAI, Google has published a policy proposal in response to the Trump administration’s call for a national “AI Action Plan.” The tech giant endorsed weak copyright restrictions on AI training, as well as “balanced” export controls that “protect national security while enabling US exports and global business operations.”
“The US needs to pursue an active international economic policy to advocate for American values and support AI innovation internationally,” Google wrote in the document. “For too long, AI policymaking has paid disproportionate attention to the risks, often ignoring the costs that misguided regulation can have on innovation, national competitiveness, and scientific leadership – a dynamic that is beginning to shift under the new administration.”
One of Google’s more controversial recommendations relates to the use of IP-protected material.
Google argues that “fair use and text-and-data mining exceptions” are critical to AI development and AI-related scientific innovation. Like OpenAI, the company wants to codify the right for it and its rivals to train on publicly available data – including copyrighted data – largely without restriction.
“These exceptions allow for the use of copyrighted, publicly available material for AI training without significantly impacting rightsholders,” Google wrote, “and avoid often highly unpredictable, imbalanced, and lengthy negotiations with data holders during model development or scientific experimentation.”
Google, which has reportedly trained a number of models on public, copyrighted data, is fighting lawsuits brought by data owners who accuse the company of failing to notify and compensate them before doing so. US courts have yet to decide whether fair use doctrine effectively shields AI developers from IP litigation.
In its AI policy proposal, Google also takes issue with certain export controls imposed under the Biden administration, which it says “may undermine economic competitiveness goals” by “imposing disproportionate burdens on US cloud service providers.” That contrasts with statements from Google competitors such as Microsoft, which in January said it was “confident” it could “comply fully” with the rules.
Notably, the export rules, which seek to limit the availability of advanced AI chips in disfavored countries, carve out exemptions for trusted companies seeking large clusters of chips.
Elsewhere in its proposal, Google calls for “long-term, sustained” investments in foundational domestic R&D, pushing back against recent federal efforts to reduce spending and eliminate grant awards. The company said the government should release datasets that might be helpful for commercial AI training, and allocate funding to “early-market R&D” while ensuring that computing and models are “widely available” to scientists and institutions.
Pointing to the chaotic regulatory environment created by the US’ patchwork of state AI laws, Google urged the government to pass federal legislation on AI, including a comprehensive privacy and security framework. Just over two months into 2025, the number of pending AI bills in the US has grown to 781, according to an online tracking tool.
Google also cautions the US government against imposing what it regards as onerous obligations around AI systems, such as usage liability obligations. In many cases, Google argues, the developer of a model has “little to no visibility or control” over how a model is being used, and thus shouldn’t bear responsibility for misuse.
Historically, Google has opposed laws such as California’s defeated SB 1047, which clearly laid out what precautions an AI developer should take before releasing a model and in which cases developers could be held liable for model-induced harms.
“Even in cases where a developer provides a model directly to deployers, deployers will often be best placed to understand the risks of downstream uses, implement effective risk management, and conduct post-market monitoring and logging,” Google wrote.
In its proposal, Google also called disclosure requirements such as those being contemplated by the EU “overly broad,” and said the US government should oppose transparency rules that would require divulging trade secrets, allow competitors to duplicate products, or compromise national security by giving adversaries a roadmap for circumventing protections or jailbreaking models.
A growing number of countries and states have passed laws requiring AI developers to reveal more about how their systems work. California’s AB 2013 requires companies developing AI systems to publish a high-level summary of the datasets they used to train them. In the EU, once the AI Act comes fully into force, companies will have to supply model deployers with detailed instructions on the operation, limitations, and risks associated with the model.