OpenAI peels back ChatGPT’s safeguards around image creation

This week, OpenAI launched a new image generator in ChatGPT that quickly went viral for its ability to create images in the style of Studio Ghibli. Beyond the pastel illustrations, GPT-4o's native image generator significantly expands ChatGPT's capabilities, improving image editing, text rendering, and spatial representation.

However, one of the most notable changes OpenAI made this week involves its content moderation policies: ChatGPT can now generate images featuring public figures, hateful symbols, and racial characteristics when asked.

OpenAI previously rejected these kinds of prompts for being too controversial or harmful. But now the company has "evolved" its approach, according to a blog post published Thursday by OpenAI's model behavior lead, Joanne Jang.

"We're shifting from blanket refusals in sensitive areas to a more precise approach focused on preventing real-world harm," said Jang. "The goal is to embrace humility: recognizing how much we don't know, and positioning ourselves to adapt as we learn."

These adjustments appear to be part of OpenAI's larger plan to effectively "uncensor" ChatGPT. In February, OpenAI announced it was starting to change how it trains AI models, with the ultimate goal of letting ChatGPT handle more requests, offer diverse perspectives, and reduce the number of topics the chatbot refuses to engage with.

Under the updated policy, ChatGPT can now generate and modify images of Donald Trump, Elon Musk, and other public figures that OpenAI previously did not permit. Jang says OpenAI doesn't want to be the arbiter of status, deciding who ChatGPT can and cannot depict. Instead, the company gives users an opt-out option if they don't want ChatGPT depicting them.

In a white paper released Tuesday, OpenAI also said it will allow ChatGPT users to "generate hateful symbols," such as swastikas, in educational or neutral contexts, as long as they don't "endorse extremist agendas."

Moreover, OpenAI is changing how it defines "offensive" content. Jang says ChatGPT used to refuse requests related to physical characteristics, such as "make this person look more Asian" or "make this person heavier." In TechCrunch's testing, we found the new ChatGPT image generator fulfills these types of requests.

ChatGPT can also now mimic the styles of creative studios, such as Pixar or Studio Ghibli, though it still restricts imitating the styles of individual living artists. As TechCrunch previously noted, this could rehash an ongoing debate around the fair use of copyrighted works in AI training datasets.

It's worth noting that OpenAI is not completely opening the floodgates to abuse. GPT-4o's native image generator still refuses many sensitive queries, and in fact it has more safeguards around generating images of children than DALL-E 3, ChatGPT's previous AI image generator, according to GPT-4o's white paper.

But OpenAI is loosening its guardrails in other areas after years of conservative complaints about alleged AI "censorship" by Silicon Valley companies. Google previously faced backlash over Gemini's AI image generator, which produced multiracial images for prompts such as "American founders" and "German soldiers in World War II," results that were plainly inaccurate.

Now, the culture war around AI content moderation may be coming to a head. Earlier this month, Republican Congressman Jim Jordan sent questions to OpenAI, Google, and other tech giants about potential collusion with the Biden administration to censor AI-generated content.

In an earlier statement to TechCrunch, OpenAI rejected the idea that its content moderation changes were politically motivated. Rather, the company says the shift reflects a "long-held belief in giving users more control," and that OpenAI's technology is now simply good enough to navigate sensitive subjects.

Whatever the motivation, it's certainly a convenient time for OpenAI to be changing its content moderation policies, given the potential for regulatory scrutiny under the Trump administration. Silicon Valley giants such as Meta and X have adopted similar policies, allowing more controversial topics on their platforms.

While OpenAI's new image generator has so far only spawned some viral Studio Ghibli memes, it's unclear what the broader effects of these policy changes will be. ChatGPT's recent updates may go over well with the Trump administration, but letting an AI chatbot answer sensitive questions could land OpenAI in hot water soon enough.
