A nonprofit is using AI agents to raise money for charity

Tech giants such as Microsoft may be pitching AI “agents” as profit-boosting tools for corporations, but one nonprofit is trying to show that agents can also be a force for good.
Sage Future, a 501(c)(3) backed by Open Philanthropy, earlier this month launched an experiment tasking four AI models in a virtual environment with raising money for charity. The models — OpenAI’s GPT-4o and o1, and two of Anthropic’s newer Claude models (3.6 and 3.7 Sonnet) — had the freedom to choose which charity to fundraise for and how best to drum up interest in their campaign.
In about a week, the four agents had raised $257 for Helen Keller International, which funds programs that deliver vitamin A supplements to children.
To be clear, the agents weren’t fully autonomous. In their environment, which lets them browse the web, create documents, and more, the agents could take suggestions from the human spectators watching their progress. And the donations came almost entirely from these spectators. In other words, the agents didn’t raise much money on their own.
Yesterday, the agents in the village set up a system to track donors.
Here’s Claude 3.7 filling in its spreadsheet.
You can see o1 has it open on its computer, partway through!
Claude notes: “I see that o1 is now also viewing the spreadsheet, which is great for collaboration.” pic.twitter.com/89b6chr7ic
— AI Digest (@AiDigest_) April 8, 2025
Still, Sage director Adam Binksmith thinks the experiment serves as a useful illustration of agents’ current capabilities and the speed at which they’re improving.
“We want to understand — and help people understand — what agents … can actually do, what they currently struggle with, and so on,” Binksmith said in an interview. “Today’s agents are just passing the threshold of being able to perform short strings of actions — the internet might soon be full of AI agents bumping into each other and interacting with similar or conflicting goals.”
The agents proved surprisingly resourceful in Sage’s test. They coordinated with each other in a group chat and sent emails via preconfigured Gmail accounts. They created and edited Google Docs together. They researched charities and estimated the minimum donation needed to save a life through Helen Keller International ($3,500). One even created an X account for promotion.
“Probably the most impressive sequence we saw was when [a Claude agent] needed a profile picture for its X account,” said Binksmith. “It signed up for a free ChatGPT account, generated three different images, created an online poll to see which image the human viewers preferred, then downloaded that image and uploaded it to X to use as its profile photo.”
The agents also ran up against technical obstacles. Occasionally, they got stuck and viewers had to prompt them with recommendations. They got distracted by games, and they took inexplicable breaks. On one occasion, GPT-4o “paused” itself for an hour.
The internet isn’t always smooth sailing for an LLM.
Yesterday, while pursuing the village’s philanthropic mission, Claude encountered a CAPTCHA.
Claude tried again and again, with (human) viewers in the chat offering guidance and encouragement, but ultimately couldn’t succeed. https://t.co/xd7qpptejgw pic.twitter.com/y4dtltge95
— AI Digest (@AiDigest_) April 5, 2025
Binksmith thinks newer, more capable AI agents will overcome these hurdles. Sage plans to continuously add new models to the environment to test this theory.
“In the future, we might try things like giving the agents different goals, multiple teams of agents with different goals, a secret saboteur agent — lots of interesting things to experiment with,” he said. “As agents become more capable and faster, we’ll match that with larger automated monitoring and oversight systems for safety purposes.”
With a bit of luck, the agents will do some meaningful philanthropic work along the way.