AI slop and fake reports are coming for your bug bounty programs

So-called AI slop, meaning low-quality LLM-generated images, videos, and text, has taken over the internet in the last couple of years, polluting websites, social media platforms, at least one newspaper, and even real-world events.
The world of cybersecurity isn't immune to this problem either. Over the last year, people across the cybersecurity industry have raised concerns about AI slop bug bounty reports: reports that claim to have found vulnerabilities that don't actually exist, because they were created with a large language model that simply invented the vulnerability and then packaged it into a professional-looking writeup.
“People are receiving reports that sound reasonable, they look technically correct. And then you end up digging into them, trying to figure out, ‘oh no, where is this vulnerability?’” Vlad Ionescu, the co-founder and CTO of RunSybil, a startup that develops AI-powered bug hunters, told TechCrunch.
“It turns out it was just a hallucination. The technical details were simply made up by the LLM,” said Ionescu.
Ionescu, who previously worked on Meta’s red team tasked with hacking the company from the inside, explained that one of the issues is that LLMs are designed to be helpful and give positive responses. “If you ask it for a report, it’s going to give you a report. And then people will copy and paste these into bug bounty platforms and overwhelm the platforms themselves, overwhelm the customers, and you get into this frustrating situation,” Ionescu said.
“That’s the problem people are running into: we’re getting a lot of stuff that looks like gold, but it’s actually just nonsense,” said Ionescu.
There have been real-world examples of this in the past year. Harry Sintonen, a security researcher, revealed that the open source security project Curl had received a fake report. “The attacker miscalculated badly,” Sintonen wrote in a post on Mastodon. “Curl can smell AI slop from miles away.”
In response to Sintonen’s post, Benjamin Piouffle of Open Collective, a tech platform for nonprofits, said that they have the same problem: their inbox is “flooded with AI garbage.”
An open source developer who maintains the CycloneDX project on GitHub pulled their bug bounty down entirely earlier this year after receiving “almost entirely AI slop reports.”
Leading bug bounty platforms, which essentially work as intermediaries between bug bounty hackers and companies that are willing to pay and reward them for finding flaws in their products and software, are also seeing a spike in AI-generated reports, TechCrunch has learned.
Contact us
Do you have more information about how AI is affecting the cybersecurity industry? We’d love to hear from you. From a non-work device and network, you can contact Lorenzo Franceschi-Bicchierai securely on Signal at +1 917 257 1382, or via Telegram and Keybase @lorenzofb, or by email.
Michiel Prins, co-founder and senior director of product management at HackerOne, told TechCrunch that the company has encountered AI slop.
“We’ve also seen a rise in false positives, vulnerabilities that appear real but are generated by LLMs and lack real-world impact,” said Prins. “These low-signal submissions can create noise that undermines the efficiency of security programs.”
Prins added that reports containing “hallucinated vulnerabilities, vague technical content, or other forms of low-effort noise” are treated as spam.
Casey Ellis, the founder of Bugcrowd, said there are definitely researchers who use AI to find bugs and write the reports they then submit to companies. Ellis said that overall they are seeing an increase of 500 submissions per week.
“AI is widely used in the majority of submissions, but it hasn’t yet caused a significant spike in low-quality ‘slop’ reports,” Ellis told TechCrunch. “This will probably escalate in the future, but it’s not here yet.”
Ellis said that the Bugcrowd team that analyzes submissions reviews the reports manually using established playbooks and workflows, as well as with machine learning and AI “assistance.”
To find out whether other companies, including those that run their own bug bounty programs, are also receiving more invalid reports or reports containing nonexistent vulnerabilities hallucinated by LLMs, TechCrunch contacted Google, Meta, Microsoft, and Mozilla.
Damiano DeMonte, a spokesperson for Mozilla, which develops the Firefox browser, said the company “has not seen a substantial increase in invalid or low-quality bug reports that appear to be AI-generated,” and that the rate at which bug reports are rejected as invalid has remained steady at five or six reports a month.
Mozilla’s employees who review bug reports for Firefox don’t use AI to filter reports, as “it would likely be difficult to do so without risking rejecting a legitimate bug report,” DeMonte said in an email.
Microsoft and Meta, companies that have both bet heavily on AI, declined to comment. Google did not respond to a request for comment.
Ionescu predicts that one solution to the problem of rising AI slop will be to keep investing in AI-powered systems that can at least perform a preliminary review and filter submissions for accuracy.
In fact, on Tuesday HackerOne launched Hai Triage, a new triaging system that combines humans and AI. According to HackerOne, the new system leverages “AI security agents to cut through noise, flag duplicates, and prioritize real threats.” Human analysts then step in to validate the bug reports and escalate them as needed.
As hackers increasingly use LLMs and companies rely on AI to triage those reports, it remains to be seen which of the two AIs will prevail.
