
Sam Altman: OpenAI has been on the ‘wrong side of history’ concerning open source

To cap off a day of product releases, OpenAI researchers, engineers, and executives, including OpenAI CEO Sam Altman, answered questions in a wide-ranging Reddit AMA on Friday.

OpenAI finds itself in a somewhat precarious position. It is fighting the perception that it is ceding ground in the AI race to Chinese companies such as DeepSeek, which OpenAI alleges stole its IP. The ChatGPT maker has been trying to shore up its relationship with Washington while simultaneously pursuing an ambitious data center project and reportedly laying the groundwork for one of the largest financing rounds in history.

Altman admitted that DeepSeek has narrowed OpenAI's lead in AI, and he said he believes OpenAI has been “on the wrong side of history” when it comes to open sourcing its technologies. While OpenAI has open-sourced models in the past, the company has generally favored a proprietary, closed-source development approach.

“[I personally think we need to] figure out a different open source strategy,” Altman said. “Not everyone at OpenAI shares this view, and it’s also not our current highest priority … We will produce better models [going forward], but we will maintain less of a lead than we did in previous years.”

In a follow-up reply, Kevin Weil, OpenAI's chief product officer, said the company is considering open sourcing older models that are no longer state-of-the-art. “We will definitely think about doing more of this,” he said, without going into further detail.

Beyond prompting OpenAI to reconsider its release philosophy, Altman said DeepSeek has pushed the company to potentially reveal more about how its so-called reasoning models, such as the o3-mini model released today, show their “thought process.” Currently, OpenAI's models conceal their reasoning, a strategy intended to prevent competitors from scraping training data for their own models. DeepSeek's reasoning model, R1, by contrast, shows its full chain of thought.


“We’re working on showing more than we show today – [showing the model thought process] will come very soon,” Weil added. “TBD on whether we show all of it – showing the full chain of thought invites competitive distillation, but we also know that people (at least power users) want it, so we’ll find the right way to balance it.”

Altman and Weil tried to dispel rumors that ChatGPT, the chatbot platform through which OpenAI launches many of its models, will become more expensive in the future. Altman said he would like to make ChatGPT “cheaper” over time, if feasible.

Altman has previously said that OpenAI loses money on its most expensive ChatGPT plan, ChatGPT Pro, which costs $200 a month.

In a somewhat related thread, Weil said that OpenAI continues to see evidence that more computing power leads to “better” and higher-performing models. That is largely what necessitates projects such as Stargate, OpenAI's recently announced massive data center initiative, Weil said. Serving a growing user base is also fueling compute demand within OpenAI, he continued.

Asked whether recursive self-improvement might be enabled by these more powerful models, Altman said he thinks a “fast takeoff” is more plausible than he once believed. Recursive self-improvement is a process in which an AI system could improve its own intelligence and capabilities without human input.

Of course, it's worth noting that Altman is notorious for overpromising. It wasn't long ago that he lowered OpenAI's bar for AGI.

One Reddit user asked whether OpenAI's models, self-improving or not, could be used to develop destructive weapons, specifically nuclear weapons. This week, OpenAI announced a partnership with the US government to give its models to the US national laboratories, in part for nuclear defense research.


Weil said he trusts the government not to misuse the models.

“I've gotten to know these scientists, and they are AI experts in addition to being world-class researchers,” he said. “They understand the power and the limits of the models, and I don't think there's any chance they would just throw some model output into a nuclear calculation. They're smart and evidence-based, and they do a lot of experimentation and data work to validate all their work.”

The OpenAI team also fielded several questions of a more technical nature, such as when OpenAI's next reasoning model, o3, will be released (“more than a few weeks, less than a few months,” Altman said); when the company's next flagship non-reasoning model, GPT-5, might land (“don't have a timeline yet,” said Altman); and when OpenAI might unveil a successor to DALL-E 3, the company's image-generating model. DALL-E 3, which was released about two years ago, is getting rather long in the tooth. Image-generation tech has improved by leaps and bounds since DALL-E 3's debut, and the model is no longer competitive on a number of benchmarks.

“Yes! We're working on it,” Weil said of a follow-up to DALL-E 3. “And I think it will be worth the wait.”
