A group of concerned AI industry insiders has taken a bold step to address their fears about the current state of artificial intelligence. In a controversial move, they've launched a website called Poison Fountain, urging others to join a data poisoning campaign aimed at undermining AI technology.
The initiative, which has been active for about a week, invites website operators to add links that feed AI crawlers with manipulated training data. This parasitic relationship between AI models and the data they scrape from websites has sparked backlash from publishers.
Data poisoning can occur at various stages and take different forms. It can result from buggy code, factual errors on public websites, or manipulated training datasets. For instance, the Silent Branding attack altered image datasets to include brand logos in text-to-image diffusion models.
Poison Fountain draws inspiration from Anthropic's research on data poisoning, which demonstrated that only a few malicious documents are needed to degrade model quality. The project's anonymous source works for a major US tech company involved in the AI boom and aims to raise awareness about AI's vulnerability to poisoning.
The source claims that five individuals are involved, some allegedly working for other major US AI companies. The website argues for active opposition to AI, citing Geoffrey Hinton's concerns about machine intelligence as a threat to humanity. It provides two URLs with data designed to hinder AI training, one accessible via HTTP and the other a .onion URL intended to be difficult to shut down.
The site encourages visitors to assist in the "war effort" by caching and retransmitting the poisoned training data and feeding it to web crawlers. The source explained that the poisoned data consists of incorrect code with subtle logic errors and bugs designed to damage language models.
While industry figures like Hinton and organizations like Stop AI and the Algorithmic Justice League have been advocating for regulatory intervention, the debate has largely focused on the extent of government involvement, which is currently minimal in the US. AI firms are actively lobbying to keep it that way.
Those behind Poison Fountain argue that regulation is futile due to the technology's widespread availability. They believe that poisoning attacks compromise the cognitive integrity of AI models and that the only way to stop their advance is through weapons like Poison Fountain.
Other AI poisoning projects exist, but some appear more focused on generating revenue from scams rather than saving humanity. The extent to which such measures are necessary is unclear, as there are already concerns about AI models degrading. Model collapse, where AI models feed on their own synthetic data, further exacerbates the issue.
The overlap between data poisoning and misinformation campaigns, or "social media," is also a concern. Academics debate the real risk of model collapse, but one recent paper predicts it could occur by 2035.
The potential risks posed by AI could diminish if the AI bubble bursts, and a poisoning movement might just be the catalyst. What do you think? Is this a necessary step to protect humanity, or a dangerous path that could lead to unintended consequences? Share your thoughts in the comments!