How the Collapse of Sam Bankman-Fried's Crypto Empire Disrupted AI

SAN FRANCISCO — In April, a San Francisco artificial intelligence lab called Anthropic raised $580 million for research into “AI safety.”

Few people in Silicon Valley had heard of the year-old lab, which builds AI systems that generate language. But the amount of money promised to the small company dwarfed what venture capitalists were pouring into other AI start-ups, including those staffed with some of the most experienced researchers in the field.

The funding round was led by Sam Bankman-Fried, the founder and CEO of FTX, the cryptocurrency exchange that filed for bankruptcy last month. After FTX’s sudden collapse, a leaked balance sheet showed Mr. Bankman-Fried and his colleagues pumped at least $500 million into Anthropic.

Their investment was part of a quiet, fanciful effort to explore and mitigate the dangers of artificial intelligence, which many in Mr. Bankman-Fried’s entourage believe could eventually destroy the world or harm humanity. Over the past two years, the 30-year-old entrepreneur and his colleagues at FTX have funneled more than $530 million — in grants or investments — into more than 70 AI-related companies, university labs, think tanks, independent projects and individual researchers to address concerns about the technology, according to a New York Times tally.

Now some of those organizations and people aren’t sure whether they can keep that money, said four people familiar with the AI efforts who weren’t authorized to speak publicly. They said they feared Mr. Bankman-Fried’s fall would cast doubt on their research and undermine their reputations. And some of the AI start-ups and organizations could eventually find themselves embroiled in FTX’s bankruptcy proceedings, with their grants potentially recoverable in court, they said.

Worries in the AI world are unexpected fallout from FTX’s disintegration, showing how far the ripple effects of the crypto exchange’s collapse and Mr. Bankman-Fried’s vaporized fortune have traveled.

“Some might be surprised by the connection between these two emerging areas of technology,” Andrew Burt, a Yale Law School attorney and visiting scholar who specializes in the risks of artificial intelligence, said of AI and crypto. “But below the surface, there are direct links between the two.”

Mr. Bankman-Fried, who is facing investigations into the FTX collapse and who spoke at The Times’ DealBook conference on Wednesday, declined to comment. Anthropic declined to comment on the investment in the company.

Mr. Bankman-Fried’s attempts to influence AI stem from his involvement in “effective altruism,” a philanthropic movement in which donors seek to maximize the long-term impact of their giving. Effective altruists are often preoccupied with what they call catastrophic risks, such as pandemics, biological weapons, and nuclear war.

Their interest in artificial intelligence is particularly acute. Many effective altruists believe that increasingly powerful AI can do good for the world, but fear that it will cause serious harm if it is not built in a safe way. While AI experts agree that any doomsday scenario is a long way off — if it happens at all — effective altruists have long argued that such a future is not beyond the realm of possibility and that researchers, businesses and governments should prepare for it.

Over the past decade, many effective altruists have worked at top AI research labs, including DeepMind, which is owned by Google’s parent company, and OpenAI, which was founded by Elon Musk and others. They helped create a field of research called AI safety, which aims to explore how AI systems could be used to do harm or could malfunction unexpectedly on their own.

Effective altruists have helped conduct similar research in policy-shaping Washington think tanks. Georgetown University’s Center for Security and Emerging Technology, which studies the impact of AI and other emerging technologies on national security, was largely funded by Open Philanthropy, an effective altruist giving organization supported by Facebook co-founder Dustin Moskovitz. Effective altruists also work as researchers in these think tanks.

Mr. Bankman-Fried has been part of the effective altruism movement since 2014. Adopting an approach called earning to give, he told The Times in April that he deliberately chose a lucrative career so he could give away much larger sums of money.

In February, he and several of his FTX colleagues announced the Future Fund, which would support “ambitious projects to improve the long-term prospects of humanity.” The fund was led in part by Will MacAskill, one of the founders of the Center for Effective Altruism, as well as other key figures in the movement.

The Future Fund had pledged $160 million in grants to a wide range of projects by early September, including research into pandemic preparedness and economic growth. Approximately $30 million was earmarked for donations to a range of organizations and individuals exploring AI-related ideas.

Among the AI-related grants from the Future Fund, $2 million went to a little-known company, Lightcone Infrastructure. Lightcone runs the online chat site LessWrong, which in the mid-2000s began exploring the possibility that AI might one day destroy humanity.

Mr. Bankman-Fried and his colleagues have also funded several other efforts to mitigate the long-term risks of AI, including $1.25 million for the Alignment Research Center, an organization that aims to align future AI systems with human interests so that the technology does not go rogue. They also donated $1.5 million for similar research at Cornell University.

The Future Fund has also donated nearly $6 million to three projects involving “large language models,” an increasingly powerful breed of AI that can write tweets, emails and blog posts and even generate computer programs. The grants were intended to mitigate how the technology could be used to spread misinformation and to reduce unexpected and undesirable behavior from these systems.

After FTX filed for bankruptcy, Mr. MacAskill and others who ran the Future Fund resigned from the project, citing “fundamental questions about the legitimacy and integrity of business operations” behind it. Mr. MacAskill did not respond to a request for comment.

Beyond grants from the Future Fund, Mr. Bankman-Fried and his colleagues directly invested in start-ups, including the $500 million in funding for Anthropic. The company was founded in 2021 by a group that included a contingent of effective altruists who had left OpenAI. It strives to make AI safer by developing its own language models, which can cost tens of millions of dollars to build.

Some organizations and individuals have already received their funds from Mr. Bankman-Fried and his colleagues. Others got only part of what they were promised. Some are unsure whether the grants will have to be returned to FTX creditors, the four people with knowledge of the organizations said.

Charities are vulnerable to clawbacks when donors go bankrupt, said Jason Lilien, a partner at charity law firm Loeb & Loeb. Businesses that receive venture capital investments from failing companies may be in a somewhat stronger position than charities, but they are also vulnerable to clawback claims, he said.

Dewey Murdick, director of the Center for Security and Emerging Technology, the Georgetown think tank supported by Open Philanthropy, said effective altruists have contributed to important research involving AI.

“Because they’ve increased their funding, it’s increased the attention to these issues,” he said, citing the fact that there’s more talk about how AI systems can be designed with safety in mind.

But Oren Etzioni of the Allen Institute for Artificial Intelligence, an AI lab in Seattle, said the views of the effective altruism community were sometimes extreme and often made today’s technologies seem more powerful or more dangerous than they actually were.

He said the Future Fund had offered him money this year for research that would help predict the arrival and risks of “artificial general intelligence,” a machine that can do anything the human brain can do. But that idea isn’t something that can be reliably predicted, Mr. Etzioni said, because scientists don’t yet know how to build it.

“These are smart, sincere people putting dollars into a highly speculative business,” he said.