More than a dozen years ago, U.K.-based game producer Helen King was about to take a new job at Ubisoft in Canada. She had already boxed up her possessions and was getting her visa in order when an opportunity at a London-based AI startup unexpectedly presented itself. King found it irresistible, even though she had no background in the technology—she’d even opted out of AI courses as a computer science student.
“Some of my friends still haven’t forgiven me,” she jokes. “Because I then had to say, ‘I’m not moving to Canada, and I’m joining a company that’s in stealth mode, so I can’t actually tell you what I’m doing.’”
The company in question was AI startup DeepMind, and its founders, Demis Hassabis, Shane Legg, and Mustafa Suleyman, enthralled King with the audacity of their vision. “They talked about wanting to solve intelligence and the opportunities that would come with that,” such as breakthroughs in cancer research, she remembers. As a program manager for research, she chipped in on everything from recruitment to running conferences.
Of course, DeepMind didn’t stay tiny and stealthy forever. King was there when it was acquired by Google in 2014 and made headlines with research breakthroughs such as AlphaGo and AlphaFold. In 2023, Google merged DeepMind with another of its AI research arms, Google Brain, to form Google DeepMind (GDM). It put Hassabis in charge of the combined operation, whose technologies include the Gemini large language models at the heart of many of Google’s AI advances. Work also continues on longstanding projects such as AlphaFold, whose protein-structure-prediction AI is now at the heart of Alphabet’s drug discovery startup Isomorphic Labs.
Today, King is GDM’s senior director of responsibility and a strategic advisor to research. They’re particularly weighty jobs given Google’s massive scale: Six of its products have two billion users apiece, nine have a billion, and 15 have half a billion. Offerings such as Google Search and Gmail have been part of the fabric of life and work for many years, magnifying AI’s potential benefit but also the impact of any glitches.
Unlike AI startups, Google also has a reputation to preserve and paying customers who prize dependability above raw innovation, raising the stakes even further. “The benefit of being known as being trustworthy and safe, and all of these things, comes with the challenge of an expectation that it carries through, even when it’s experimental and in early products,” says King.
Though the torrid pace of Google’s recent AI announcements may feel like a response to the era OpenAI unleashed by introducing ChatGPT two years ago, the company has been preparing for this moment for far longer. Back in 2018, DeepMind formed a Safety and Responsibility Council, whose membership included senior leaders from across the organization. The group continues to play a core role at GDM, providing input on new research efforts from the start. King says that the goal is to encourage ongoing conversations between Council members and technologists working on particular projects, giving everyone involved sufficient time to think matters through: “It’s not a black box experience for the team, and I think that really helps bring them on the journey.”
A commitment to openness also explains why Google DeepMind shares its learnings about AI ethics in research papers, such as a recent one on advanced assistants that lists King among its authors. “We see ourselves as leaders in safety and responsibility, and providing that sort of thought leadership is also important,” she says. “It’s not just how do we internally ensure safe and responsible models, but also how do we ensure in the broader research community that that’s happening.”
For all the safety measures Google has in place, the company hasn’t avoided a few embarrassing AI-related mishaps, such as the AI Overviews in Search getting some facts really, really wrong. In part, that reflects the many moving parts involved in turning GDM’s research and underlying technologies into working products, a process that spreads responsibility among many stakeholders. King’s team focuses on “anything that is a GDM project or a GDM product,” she says. “That is where we tend to be involved. Not in the search algorithms themselves.”
Still, she acknowledges that the people who rely on Google’s AI-infused tools don’t draw distinctions between the myriad teams behind them and their varying roles in keeping them safe. Indeed, the human-like veneer of current and future AI experiences may cause users to think of them as personifications of Google in a way that’s new.
“The LLMs are implicitly being seen as the voice of whichever company the LLM is [associated] with,” she says. “I don’t think that was ever an intended behavior, in the same way I don’t think someone says Google search represents the voice of Google. But it’s really interesting that for LLMs, that’s where everyone in the world has gone.”
This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.