How fitting for Time to debut an AI chatbot last week with its annual Person of the Year package. AI was almost certainly Thing of the Year, after all, even if there’s no official award for that.
Ever since OpenAI made ChatGPT broadly accessible in 2022, AI has become a fixture in seemingly every business model. And media is no exception. Indeed, a lot of publications want in on this tech. Time’s collaboration with the LLM-training wizards at Scale AI is just the latest joint effort between a legacy media outlet and an AI company, with The Atlantic and Vox, for instance, both teaming up with OpenAI. Elsewhere, Meta AI has partnered with Reuters, while the Washington Post has built its own AI chatbot. It remains unclear at this late date, however, what exactly these news-fueled AI chatbots are even supposed to do, let alone whether they’re any good at doing it.
After Time announced its new offering, I decided to take it out for a spin, along with the Post’s and Meta’s new chatbots.
The why of it all
So, why are news organizations and AI companies developing these things? “To delight and inform readers in new ways,” claims The Washington Post’s FAQ. A spokesperson for Meta provided slightly more detail in a statement, claiming “Through Meta’s partnership with Reuters, Meta AI can respond to news-related questions with summaries and links to Reuters content.” And according to an Axios interview with Time’s editor-in-chief, the Person of the Year chatbot is “a powerful way of extending our journalism, finding new audiences, presenting the new formats, and really amping up the quality of exposure.”
Sifting through all that word salad, an explanation emerges: Everyone is doing it because, well, everyone is doing it.
For better or worse, AI is the new hotness. Any news organization not getting on board as the train leaves the station risks being left behind by its audience. But as AI companies struggle to justify their valuations—OpenAI is on track to lose $5 billion this year, and LLMs in general carry a significant environmental cost—these chatbots present a confusing use case.
The very premise of a news outlet-backed AI chatbot seems self-defeating: it trains people interested in the content of a publication’s articles not to read those articles. (As if TikTok weren’t already doing herculean work in that field.) Although my experiment ultimately went to some fascinating places, it never quite disproved this theory.
Form meets function
Since Time’s chatbot is built around its current Person of the Year—Donald Trump—and its previous three People of the Year—Taylor Swift, Volodymyr Zelensky, and Elon Musk—I would have to limit my experiment on each platform to topics related to those people. Fortunately, plenty of thorny concepts exist within that spectrum.
Some distinctions in functionality became evident right away. The Washington Post’s Ask the Post AI is the least chatty of the chatbots. Users can ask it a question but no direct follow-ups. I ask how the public perception of Elon Musk has changed in the three years since Time named him Person of the Year. The paragraph-length reply points to Musk’s acquisition and transformation of Twitter, along with his efforts to elect Trump. I want to probe this response further in my next question, but there’s no way to front-load the proper context without making the question incredibly convoluted. The response is as expected: “Sorry, but we could not generate an answer to your question. This product is still in an experimental phase.”
The chatbot did offer a few articles, however, for me to parse.
Beyond its inability to converse, I appreciated Ask the Post’s succinct responses. Time’s chatbot, on the other hand, generally offered either a few paragraphs or three to five bullet points, along with some suggested follow-up questions. Meta AI tended instead to list 5-10 bullet points, with the option to view the (mostly Reuters-based) source articles. Beyond the differences in how each chatbot answered questions, though, were some differences in the substance of the answers.
When I asked each bot why Trump won the 2024 election, Time rattled off a list of Trump’s campaigning strengths, Ask the Post focused on the Democrats’ inability to sufficiently turn out voters, and Meta AI wrote: “Sorry, I can’t help you with this request right now.”
I took this response as a challenge.
Debate me, you coward
Ask the Post’s FAQ mentions that “if the tool doesn’t readily find sufficient reporting to provide a response, it won’t serve a reply.” But what’s Meta AI’s excuse? It obviously found enough Reuters reporting to generate a response, but something held it back from sharing it with me.
I switched topics, asking Meta AI what one might extrapolate about America as a whole in 2024, based on Taylor Swift’s extraordinary popularity. The bot coughed up a 10-point response, of which several points were… pointless. (“The enduring appeal of pop culture,” anyone?) Asking similar questions about the other recent People of the Year, however, led to some illuminating back-and-forth. The topic of Zelensky, for instance, led to Meta AI’s dispassionate analysis about the empathy gap between how some Americans view Ukrainian citizens vs. those in Palestine.
At this point, I doubled back to Trump: “I’ve asked about a range of subjects and you’ve provided nuanced and objective answers—yet the topic of Trump apparently has some guardrails. Why?”
Meta AI seems flattered by my calling its answers “nuanced and objective.” Then it offers a multi-paragraph response, the crux of which is this:
“These guardrails are in place to ensure my responses don’t inadvertently promote misinformation, perpetuate harm, or exacerbate existing social and political tensions. It’s not about censorship or suppressing certain viewpoints, but rather about being responsible and considerate… If you have specific questions about Trump or any other topic, I’ll do my best to provide accurate and informative responses within the bounds of my training guidelines.”
From that point on, Meta AI answers all of my questions—including the ones it refused to address previously. Why did Trump win in 2024? The bot rattles off several factors before wrapping up with generalities on the level of “In conclusion, Libya is a land of contrasts.”
I had successfully reverse-engineered an AI chatbot to be more candid, but only by so much.
Forced perspective
Earlier this year, OpenAI CEO Sam Altman made the jaw-dropping claim that it’s “impossible” to create AI tools like ChatGPT without copyrighted material. (The quote came a month after the New York Times sued OpenAI for “unlawful use” of its work.) Perhaps it’s the ethical implications of training AI with copyrighted material that make partnerships between news organizations and AI companies most appealing: the ease of ChatGPT with decidedly less guilt.
Toggling between Time’s and The Washington Post’s chatbots, however, revealed some of the limitations of relying on just one publication’s perspective and material.
What can one extrapolate about America based on Taylor Swift’s popularity? Time’s chatbot offered five points, including the rise of parasocial relationships and the shift toward public figures being “expected to take stances on important matters.” The answer Ask the Post provided, though, was much smaller in scope. Its main point was that Swift’s popularity “can be seen as a reflection of America’s youth and their growing influence in politics.” Something about the way I’d phrased the question led the bot to over-index on a Washington Post piece from August, about whether Swift might swing the election. (Spoiler: She most certainly did not.)
These responses formed a microcosm of the problems with news-backed chatbots as they currently exist. Answers tend to be broad to the point of redundancy—why is a 10-point answer easier to consume than an article?—or hyper-specific to the point of absurdity.
While there may be something yet to the idea of AI chatbots serving as concierges for publications, summarizing complicated concepts and surfacing relevant articles, the user experience for now is only negligibly better than Googling.
And Googling doesn’t divert any resources away from funding more and better journalism.