For AI to transform public safety, AI itself must also be safe

Artificial intelligence tools are changing myriad aspects of modern life with dizzying speed, from business insights and public safety to social media and society at large. And we know that for all AI has changed so far, there is far more change to come.

Therefore, this is the moment to grapple with the big questions. What is responsible AI? What does it mean to design an ethical algorithm? How can we design for the greater good today to ensure a safer tomorrow?

The answers are not simple, but they are essential, agreed a panel of experts during a conversation presented by Motorola Solutions at Fast Company’s Most Innovative Companies Summit in May. Here are three key takeaways from their discussion. (Scroll to the bottom to watch the entire panel discussion.)

1. Notions of trust depend heavily on context.

What does it mean for AI to be ethical, responsible, safe, and trustworthy? The answers are extremely nuanced, which makes it difficult to debate on a conceptual level devoid of specific context, said Michael Kearns, Amazon scholar and professor of computer and information science at the University of Pennsylvania.

“The generative AI era has thrown the community for a loop in many ways because of the open-endedness of the output,” Kearns said. “It’s very hard to define these things absent any specific use cases, as it depends very greatly on what I’m using it for.”

Kearns offered a simple example related to toxic language: If a person were using AI as a writing aid for a children’s book, the tolerance for toxicity would be zero. But if they were using it to write historical fiction, there would be more latitude.

Context is not only situational, however. It also applies to the current emotional state of the humans seeking to leverage an AI tool, added Mahesh Saptharishi, executive vice president and chief technology officer of Motorola Solutions.

“Folks in public safety say their jobs are hours of boredom punctuated by moments of terror,” Saptharishi said. “So, the same set of tools must serve this customer base who are protecting our communities while they’re relaxed—perhaps almost bored, fighting complacency—and then again when they’re incredibly stressed.”

Motorola Solutions must reckon with this truth when developing its AI tools for public safety uses, such as automatic translation between 911 operators and callers who don’t speak the same language. The company calls this design principle high-velocity human factors: the need to create tools that remain simple and effective when users are under extreme stress.

2. Operationally, AI is a unique subset of traditional risk management.

“For me, responsible AI is a subset of traditional risk management,” said Patrick Huston, U.S. Army brigadier general (Ret.) and member of the FBI’s AI Task Force. “We call it out and give it its own name not only because AI is newer, but because it carries some unique risks.”

In Huston’s view, AI poses five unique risks:

  1. It’s opaque, in that the forces driving AI’s decision-making and other processes are often not transparent.
  2. It’s ever-changing because AI is constantly learning and evolving based on data.
  3. It can be biased, as it’s designed and trained by humans who may inadvertently introduce that bias.
  4. It can hallucinate inaccurate information for reasons that are often unclear.
  5. It can displace some types of jobs, which makes a cohort of people inherently distrustful of it.

These challenges are difficult, but they must be reckoned with and mitigated as much as possible if people are to place trust in AI, the panelists agreed.

“There’s a lot that we know scientifically, but we need to remember that these tools and systems are technological artifacts of our own making,” Kearns said. “The first-line solution is to design that technology so that these problems are mitigated or avoided. . . . It’s much better to fix the problem in the first place than it is to litigate or regulate it after harms have been inflicted.”

3. The most powerful potential lies in AI-human partnerships.

Asked about AI’s most powerful use cases in public safety, Saptharishi sees two broad categories. The first is when AI can remove the friction in connecting people, such as automatic language translation during 911 calls.

The second category is connecting the right people to the right information in times of need. Saptharishi shared a real-world example: A child with special needs went missing in a northeastern U.S. city, and law enforcement officials had footage from hundreds of video cameras equipped with AI analytics.

“This city uses Motorola Solutions technologies across the board, so they were able to search this footage by description of the child—and within a matter of minutes, our system could tell them where the child wasn’t,” Saptharishi said. “That way, they focused human attention on where the child could be. Effectively, AI removed the haystack and allowed humans to find the needle.”

The child was located quickly and safely reunited with their parents, a joint effort of AI and human responders.

“The secret is to combine humans and machines in ways that leverage the respective strengths of each,” Huston said. “People will present it sometimes as if it’s an either/or choice: ‘You must choose the human or the machine, so which one is it going to be?’ That’s a false dichotomy. You can have both, and you can have the best of both.”

Watch the full panel:
