The pipeline from OpenAI to Anthropic remains as strong as ever, with cofounder Durk Kingma becoming the latest high-profile figure to make the move. On Tuesday, Kingma, one of the lesser-known cofounders of OpenAI, joined Anthropic, best known for developing the rival AI chatbot Claude.
In a series of posts on X, Kingma revealed that he’ll be working mostly remotely, from the Netherlands (where he’s based), but didn’t say which Anthropic team he’ll be joining—or leading.
“Anthropic’s approach to AI development resonates significantly with my own beliefs; looking forward to contributing to Anthropic’s mission of developing powerful AI systems responsibly,” Kingma wrote on X. “Can’t wait to work with their talented team, including a number of great ex-colleagues from OpenAI and Google, and tackle the challenges ahead!”
Kingma holds a PhD in machine learning from the University of Amsterdam and spent several years as a doctoral fellow at Google before joining OpenAI’s founding team as a research scientist. After leading OpenAI’s algorithms team, Kingma joined Google Brain (later merged with DeepMind) in 2018, where he focused on generative models for text, image, and video.
Kingma follows a trio of leaders who announced their departures last week: chief research officer Bob McGrew; VP of research Barret Zoph; and chief technology officer Mira Murati, who briefly took over as interim CEO when the board fired Sam Altman last November.
As of now, only three members of the original OpenAI founding team remain: CEO Sam Altman, research scientist Wojciech Zaremba, and company president Greg Brockman (who is currently on sabbatical until the end of the year).
Anthropic, founded in 2021 by seven former OpenAI employees, aims to position itself as a more safety-centered alternative to OpenAI. CEO Dario Amodei, previously VP of research at OpenAI, split from the company over its growing commercial focus. Amodei brought with him a number of ex-OpenAI employees to launch Anthropic, including OpenAI’s former policy lead Jack Clark.
Anthropic has since recruited more than five former OpenAI employees, including fellow cofounder John Schulman, who left this past August, and former safety lead Jan Leike, who resigned in May. Many of these former employees cite safety as a primary concern.
Leike, who was part of a team that focused on the safety of future AI systems, expressed his disagreement with OpenAI’s leadership priorities and said that these issues had reached a “breaking point.”
“Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote on X.
OpenAI’s former chief scientist Ilya Sutskever co-led the company’s Superalignment team with Leike and also resigned in May. He backed Altman’s brief ouster in November, though he later expressed regret over that support. Since his resignation, he has founded and raised a $1 billion round for his new startup, Safe Superintelligence, which aims to build superintelligence with an emphasis on safety.
In June of this year, 13 ex-OpenAI and Google DeepMind employees warned in an open letter that advanced AI companies, like OpenAI, stifle criticism and oversight. They argued that current whistleblower protections “are insufficient” because they focus on illegal activity rather than concerns that are mostly unregulated—namely AI safety. They called for frontier AI companies to provide assurances that employees will not be retaliated against for responsibly disclosing risk-related concerns.
Reminiscent of the PayPal mafia, a number of departures, like Sutskever’s, have spawned major tech startups and OpenAI competitors.
Former OpenAI researcher Aravind Srinivas is one of the cofounders of Perplexity, the AI-powered search engine. Cresta, a generative AI service for contact centers and sales employees, was founded by former technical staff member Tim Shi, while Daedalus, which applies AI to precision manufacturing, was founded by former tech lead Jonas Schneider. Gantry, a machine learning infrastructure company, was founded by former OpenAI research scientist Josh Tobin.
“You’ve got 20-30,000 companies [across Silicon Valley] that have exploded because more people were willing to leave the lab,” says HP Newquist, executive director of the Relayer Group and an AI historian. “It’s a core group of people who are really able to understand how this all works.”
This past Tuesday, OpenAI finalized a deal to receive $6.6 billion in new funding from investors, who valued the company at $157 billion.