Twenty months into the generative AI revolution, conversational chatbots like ChatGPT are still seeking a killer use case.
It’s not for lack of trying. Generative AI is everywhere, from bank chatbots to fast-food drive-thru screens, and a quarter of Americans have now used ChatGPT, according to the Pew Research Center.
It’s less clear, though, whether people actually find the tech very useful. While one in four Americans have used ChatGPT at some point, just one in five have used it for work. And Goldman Sachs, which 15 months ago forecast that generative AI would deliver an enormous boost to global GDP and productivity, is now winding back some of its bullishness on the tech.
While people might welcome AI’s ability to push a PowerPoint presentation away from ‘90s-inherited clipart into something more modern, it’s less certain why we need an AI chatbot atop our Facebook Messenger screen.
We’re in the tricky phase of AI adoption, where big tech companies have spent millions—sometimes billions—of dollars developing their own tools or investing in others, and now need to come up with reasons to justify that extensive, expensive investment. And so we’re left with products like LinkedIn’s AI-powered prompts that suggest questions related to the posts on your feed; Amazon’s Rufus AI assistant, which invites you to ask questions about products you’ve never shown an interest in; or Zoom’s new AI notetaking assistant, whose performance is so poor it would probably be fired if it were a human.
“This sort of pointless AI is really annoying, because it can really take away from what you expect when you go to a website,” says Catherine Flick, a professor of ethics at Staffordshire University. “Companies are really trying to get as much out of these models as they can, while the hype is still high.”
Even for startups, there’s a financial incentive to embrace this AI-in-everything approach: Per Crunchbase, funding for AI startups doubled in the second quarter. The froth is fierce. “Having AI in the name makes it seem like they are incredibly sophisticated, while they’re probably offering pretty much what every other cosmetic enhancement salon offers,” says Mike Katell, an ethics fellow at the Alan Turing Institute.
AI with everything doesn’t just affect the consumer experience; it’s likely having a meaningful effect on our climate. No one knows exactly what that impact is—the companies involved won’t say—but third-party data suggests it isn’t good. One estimate pegged the environmental cost of training (not using) GPT-3 at around 500 tons of carbon dioxide, though that figure is now quite out of date. In 2022, before the generative AI boom, Gartner forecast that AI tools would eventually use more energy than the entire human workforce.
“Inside the tech bubble, more AI is the most important thing in the world,” says Margaret Mitchell, chief ethics scientist at Hugging Face, which makes open-source AI tools it claims are less damaging to the environment. “Outside of that bubble, the fact that—for example—crops are dying due to the most extreme heat waves on record is a tad more important. Ironically, stuffing AI into everything makes the global warming issue worse. But they have air conditioning in that bubble.”
Tech companies are playing with fire by putting such half-baked products in front of the public, adds Flick, the Staffordshire ethicist. “It’s about trying to kind of capitalize on the fact that people aren’t quite sick of it just yet,” she says. “They’re still kind of not sure what it means or how to, you know, how to engage with it effectively.”
But the less satisfying people’s experience with these AI-powered tools, the quicker they’re likely to tire of the tech—and that could happen long before it shows its full potential.