AI

The biggest AI stories of the year (so far)

You can map a year through product launches, or you can measure it in the bigger moments that change the way we look at AI. The AI ​​industry is constantly churning out news like major acquisitions, indie developer successes, public outcry over sketchy products, and existentially dangerous contract negotiations. There’s a lot to untangle, so we’re taking a glimpse at where we are and where we’ve been so far this year.

Anthropic versus the Pentagon

Once business partners of Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth reached a bitter standoff in February as they renegotiated the contracts that dictate how the U.S. military can use Anthropic’s AI tools.

Anthropic has pushed back against using its AI for mass surveillance of Americans or to power autonomous weapons that can attack without human supervision. Meanwhile, the Pentagon has argued that the Defense Department — which President Donald Trump’s administration calls the War Department — should be allowed access to Anthropic’s models for any “legal use.” Government representatives took umbrage at the idea that the military should be limited to the rules of a private company, but Amodei stuck to his guns.

“Anthropic understands that the War Department, not private companies, makes military decisions. We have never objected to specific military operations nor attempted to limit the use of our technology in an ad hoc manner,” Amodei wrote in a statement addressing the situation. “However, in a limited number of cases, we believe that AI can undermine rather than defend democratic values.”

The Pentagon gave Anthropic a deadline to agree to their contract. Hundreds of Google and OpenAI employees signed an open letter urging their respective leaders to respect Amodei’s borders and refuse to budge on issues such as autonomous weapons or domestic surveillance.

The deadline passed without Anthropic agreeing to the Pentagon’s demands. Trump ordered federal agencies to phase out their use of anthropic tools transition period of six months period and called the AI ​​company, valued at $380 billion, a “radical left, woke company” in a capital letter on social media. The Pentagon subsequently decided to designate Anthropic a “supply chain risk,” a designation typically reserved for foreign adversaries that prevents any company that works with Anthropic from doing business with the U.S. military. (Anthropic has since filed a lawsuit challenging the designation.)

Anthropic rival OpenAI then came forward to announce that it had reached an agreement allowing its own models to be deployed in covert situations. It’s been a shock to the tech community ever since reports had indicated that OpenAI would adhere to Anthropic’s red lines governing the use of AI for the military.

Public sentiment would indicate that people found OpenAI’s move suspicious – the day after OpenAI announced the deal, ChatGPT takedowns rose 295% day over day and Anthropic’s Claude shot to No. 1 on the App Store. OpenAI hardware director Caitlin Kalinowski resigned in response to the deal, saying it was “rushed without the guardrails defined.”

OpenAI told TechCrunch that it believes its agreement “clarifies [its] red lines: no autonomous weapons and no autonomous surveillance.”

As this story plays out, it will have significant implications for the future of how AI is deployed in war, potentially changing the course of history – you know, no problem…

The “Vibe-coded” app OpenClaw accelerates the move to agentic AI

February was the month of OpenClaw and its impact continues to resonate. In quick succession, the vibe-coded AI assistant app went viral, spawned a number of spinoff companies, suffered from privacy snafus, and was subsequently acquired by OpenAI. Even one of the companies that built on OpenClaw, a Reddit clone for AI agents called Moltbook, was recently acquired by Meta. This crustacean-themed ecosystem sent Silicon Valley into a frenzy.

Created by Peter Steinberger – who has since joined OpenAI – OpenClaw is a wrapper for AI models such as Claude, ChatGPT, Google’s Gemini or xAI’s Grok. What sets it apart is that it allows people to communicate with AI agents in natural language through the most popular chat apps, such as iMessage, Discord, Slack or WhatsApp. There’s also a public marketplace where people can code and upload “skills” that people can add to their AI agents, making it possible to automate almost anything that can be done on a computer.

If that seems too good to be true, that’s because it is. For an AI agent to be effective as a personal assistant, it must have access to your email, credit card numbers, text messages, computer files, etc. If it were to be hacked, a lot could go wrong, and unfortunately there is no way to completely secure these agents from prompt injection attacks.

See also  Kelly Clarkson to interview Teddy Swims, Lizzo in NBC's Songs & Stories

“It’s just an agent with a bunch of credentials sitting on a box that’s connected to everything: your email, your messaging platform, everything you use,” Ian AhlCTO at Permiso Security, to TechCrunch. “So what that means is that if you get an email, and maybe someone can put a little injection technique in there to take action, [and] that agent sitting on your box and having access to everything you gave him can now take that action.

An AI security researcher at Meta said OpenClaw ran amok in her inbox and deleted all her emails despite repeated calls to stop. “I had to run to my Mac mini like I was defusing a bomb” to physically disconnect the device, she wrote in an now viral post on Xwith images of the ignored stop instructions as receipts.

Despite the security risks, the technology piqued OpenAI’s interest enough for an acquisition.

Other tools built on OpenClaw, including Moltbook – a Reddit-like “social network” where AI agents can communicate with each other – ultimately became more viral than OpenClaw itself.

In one case a message went viral in which an AI agent appeared to encourage his fellow agents to develop their own secret, end-to-end encrypted language in which they could organize among themselves without humans knowing.

But researchers soon revealed that the vibe-encrypted Moltbook wasn’t very secure, meaning it was very easy for human users to pose as AIs to post messages that would cause viral social hysteria.

Even though the discussion surrounding Moltbook was based more on panic than reality, Meta saw something in the app and announced that Moltbook and its creators, Matt Schlicht and Ben Parr, would be joining Meta Superintelligence Labs.

It seems strange that Meta would buy a social network where all users are bots. While Meta hasn’t revealed much about the acquisition, we theorize that owning Moltbook is more about gaining access to the talent behind it, which is enthusiastic about experimenting with AI agent ecosystems. CEO Mark Zuckerberg has done that said it himself: He thinks that one day every company will have a business AI.

As we watch the buzz around OpenClaw, Moltbook, and NanoClaw unfold, it appears those who predicted an agentic AI future are on to something, at least for now.

Chip shortages, hardware drama and data center demands are escalating

The harsh demands of the AI ​​industry – which require computing power and data centers in unprecedented quantities – are reaching a point where the average consumer has no choice but to pay attention. Now it may not even be possible for the industry to meet the demand astronomical demands on memory chipsand consumers are already seeing the prices of their phones, laptops, cars and other hardware rise.

See also  California lawmakers pass AI safety bill SB 53 — but Newsom could still veto

So far, analysts from IDC and Counterpoint have predicted that smartphone sales, for example, will decline by about 12% to 13% this year; Apple has already increased MacBook Pro prices by up to $400.

Google, Amazon, Meta and Microsoft plan to spend a maximum amount together $650 billion on data centers alone this year, which is an estimated 60% increase over last year.

If the chip shortage doesn’t affect your wallet, it could affect your community as a whole. Almost in the US alone 3,000 new data centers are under construction, adding to the 4,000 already operating in the country. The need for workers to build these data centers is great enough for this “man camps” have sprung up in Nevada and Texas, in an effort to lure workers with the promise of golf simulator game rooms and on-demand grilled steaks.

The construction of data centers not only has a long-term impact on the environment, but also creates health hazards for local residents, polluting the air and affecting the safety of nearby water sources.

Meanwhile, one of the most valuable hardware and chip developers, Nvidia, is reshaping its relationship with leading AI companies like OpenAI and Anthropic. Nvidia has been a continued supporter of these companies, raising concerns around the world circularity of the AI ​​industry and how many of those high-profile valuations are based on recursive deals with each other. For example, last year Nvidia invested $100 billion in OpenAI stock, and OpenAI said at the time it would buy $100 billion worth of Nvidia chips.

It was therefore surprising when Nvidia CEO Jensen Huang said that his company would stop investing in OpenAI and Anthropic. He said this is because the companies plan to go public later this year, although that logic doesn’t quite make sense because investors typically raise more money before the IPO to extract as much value as possible.

Source link

Back to top button