As she was finishing her undergraduate degree at Princeton University, Rebecca Portnoff was trying to figure out what came next: more school, or a job at a tech company?
She studied computer science, doing her dissertation on natural language processing. This was over a decade ago, when artificial intelligence wasn’t the buzzword it is today but still held plenty of promise and excitement for those working with it.
Around the same time, she picked up a copy of Half the Sky, a book by Nicholas Kristof and Sheryl WuDunn about human rights abuses against women across the world. The book, recommended by her sister, ended up leading her to the groundbreaking path she’s on today at the nonprofit Thorn.
“I decided that I wanted to make an impact in this space, but didn’t really know what that looked like as someone with a machine learning and computer science background, and figured I would have a better chance of answering that question as a graduate student than working full-time at a tech company,” Portnoff tells Fast Company.
Portnoff completed her PhD at UC Berkeley, spending her time learning about the impacts of child sexual abuse and the efforts in place to combat it.
Fast-forward to today: Portnoff is the vice president of data science at Thorn, a nonprofit cofounded by Demi Moore and Ashton Kutcher that uses tech to fight child exploitation. Her team uses machine learning and artificial intelligence to identify victims, stop revictimization, and prevent abuse from occurring in the first place.
For all the ways tech can fight child sexual abuse, it can also amplify it. For example, bad actors could use generative AI to create realistic child sexual abuse material. Portnoff is leading an initiative with the nonprofit All Tech Is Human that works with tech giants to put new safety measures in place to prevent such misuse. She also led Thorn and All Tech Is Human’s Safety by Design initiative last year, which encourages tech companies to build safeguards against child sexual abuse into their AI from the start, rather than retrofitting the tech after issues arise.
Amazon, Anthropic, Google, Meta, OpenAI, Microsoft, and a handful of other companies have pledged to adopt Safety by Design principles as part of the project. For example, OpenAI integrated part of the tech into its DALL-E 2 generative AI web app.
“As far as where things need to go, or where things will be headed with the Safety by Design work and preventing the misuse of some of this, I know that there are days where I feel really hopeful with how the ecosystem has moved to try to mitigate this,” Portnoff says. “And there are also days where I feel it seems like we haven’t moved fast enough. At the end of the day there are going to be companies and developers that work to prevent this misuse, and there will be those that do not, and so there is going to need to be legislation that comes into play when it comes to bringing along that full ecosystem.”
This story is part of AI 20, our monthlong series of profiles spotlighting the most interesting technologists, entrepreneurs, corporate leaders, and creative thinkers shaping the world of artificial intelligence.