The most exciting Apple Intelligence feature has nothing to do with generating canned text or impromptu emoji.
While you’ll be able to do both of those things in iOS 18, Apple’s also playing a longer game, building a genuine AI platform for third-party apps. Using an overhauled Siri, iOS apps will soon let you accomplish pretty much any action with just a voice command, even when that action involves more than one app.
Apple has spent years laying the groundwork for this ecosystem, gradually giving developers more ways to hook their apps into Siri. With all the hype around generative AI and large language models, it’s finally in position to capitalize on the investment—and other tech giants such as Google and Amazon will have a tough time catching up.
All about App Intents
The key to Siri’s upcoming overhaul is a developer feature called “App Intents,” which lets app makers define what actions their users can take and exposes them to other parts of iOS. If a to-do list app supports App Intents, for instance, you may be able to search for agenda items in the Spotlight search menu, or add tasks as part of a routine in the Shortcuts app.
Apple introduced App Intents two years ago with iOS 16, but until now it’s only told developers to expose the most common actions that users might take. In iOS 18, it’s asking developers to expose everything that their apps are capable of doing.
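For a rough sense of what this looks like on the developer side, here’s a minimal sketch of an App Intent for a hypothetical to-do app; the AddTaskIntent name, its parameters, and the placeholder save step are illustrative, not taken from any shipping app.

```swift
import AppIntents

// A hypothetical to-do app exposing "add a task" as an App Intent.
struct AddTaskIntent: AppIntent {
    static var title: LocalizedStringResource = "Add Task"
    static var description = IntentDescription("Adds a new task to your to-do list.")

    // Parameters become the slots that Siri, Shortcuts, and Spotlight can fill in.
    @Parameter(title: "Task Name")
    var name: String

    @Parameter(title: "Due Date")
    var dueDate: Date?

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // A real app would save the task to its own data store here.
        return .result(dialog: "Added \(name) to your list.")
    }
}
```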
Siri will in turn support all of those App Intents as voice commands, and will use improved AI models to better understand what users are trying to accomplish. Siri will also recognize what’s on-screen, so you can ask it to act on a photo or the contents of an email.
The result will be a significant expansion of voice control on iOS, with no need for robotic syntax or clunky Shortcuts routines, and more confidence that when you ask Siri to do something, it’ll actually know how to do it.
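Building on the hypothetical to-do app above, here’s a sketch of how an intent gets wired up to Siri today: an App Shortcuts provider registers phrases for it, and the promise of Apple Intelligence is that Siri will handle looser phrasings of the same requests. The phrases and type names below are illustrative.

```swift
import AppIntents

// Registers Siri phrases for the hypothetical AddTaskIntent sketched above.
struct TodoAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: AddTaskIntent(),
            phrases: [
                "Add a task in \(.applicationName)",
                "Remind me to do something in \(.applicationName)"
            ],
            shortTitle: "Add Task",
            systemImageName: "plus.circle"
        )
    }
}
```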
“It’s very, very exciting,” says Amir Salihefendic, the CEO of Doist, which makes the popular task management app Todoist. “They definitely have the right architecture for this.”
With the current version of Todoist, you can only ask Siri to create new tasks or reminders. Salihefendic expects you’ll be able to do more with voice control in iOS 18, such as creating new projects, assigning tasks, and checking off completed items.
“We intend to expose all of the functionality and make that available to [Siri],” he says. “And that means you can have some incredible, advanced workflows.”
Denys Zhadanov, a board member for Readdle, says Apple Intelligence will “unlock a plethora of use cases” for the company’s apps, which include Spark Mail, Documents, Calendars, PDF Expert and Scanner Pro. Readdle plans to make every action in these apps controllable via Siri.
“[I]f done right, you can have an augmented personal assistant while searching for email, sending replies, generating the body of the email itself, etc,” Zhadanov says via email.
Will app makers bite?
With any new iOS update, there’s always a cadre of developers who will leap to support the latest features. But if you’ve ever dabbled in creating iOS Shortcuts or tried looking for something in Spotlight search, you know that some app makers don’t bother. The big challenge ahead for Apple will be to get a critical mass of developers to go all-in on App Intents.
Matthew Cassinelli, a writer and consultant on iOS automation who worked on the app that eventually became Shortcuts, believes Apple will succeed because of all the years it spent laying the groundwork.
After all, App Intents aren’t just about Siri. They’re also an essential building block for other iOS features, such as widgets, Spotlight search, Live Activities, and iOS 18’s expanded Control Center and lock screen toggles. Developers that support these features will also be expanding Siri by extension.
“They’ve done a very good job of seeding this to developers in a way that’s key to their entire ecosystem and how it runs,” Cassinelli says.
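As a rough sketch of that reuse, the same hypothetical AddTaskIntent from the earlier examples could also back one of iOS 18’s new Control Center buttons; the kind identifier and labels here are placeholders.

```swift
import SwiftUI
import WidgetKit

// A sketch of an iOS 18 Control Center button driven by the same
// hypothetical AddTaskIntent, so one intent serves several system surfaces.
struct AddTaskControl: ControlWidget {
    var body: some ControlWidgetConfiguration {
        StaticControlConfiguration(kind: "com.example.todo.add-task") {
            ControlWidgetButton(action: AddTaskIntent()) {
                Label("Add Task", systemImage: "plus.circle")
            }
        }
    }
}
```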
Apple’s also going to benefit from the ongoing AI hype wave. Instead of just promoting a better version of Siri, Apple can wrap it in the broader “Apple Intelligence” branding, with a marketing push aimed at both users and app makers. Cassinelli says he’s already seeing a change in attitude toward App Intents from the latter group.
“They announced Apple Intelligence stuff, and since then, every app developer’s like, ‘Alright, I’m doing this now,’” he says.
A head start
None of the people I spoke with expect Apple’s execution to be flawless, and its new version of Siri will still have limitations.
For one thing, not every app will be able to use expanded App Intents at the outset. In iOS 18, Apple has trained and optimized Siri for 12 domains: books, spreadsheets, camera, presentations, file management, photos, mail, documents, browsing, word processing, whiteboards, and journals.
Danilo Bonardi, the CEO and co-founder of Shiny Frog, which makes the notetaking app Bear, is also bummed about one lingering limitation: You’ll still need to include the name of the app you want to interact with in voice commands. “This adds a lot of friction to Siri,” he says.
But even if Apple has more work to do, it still has a big lead over other AI assistants in creating an ecosystem for third-party developers.
Google, for instance, has an extension system for its Gemini assistant, but it only works with six Google services such as YouTube and Gmail. The company did not announce third-party extension support at its I/O conference this month, and it has deprecated third-party “Conversational Actions” for Google Assistant devices. The company has a separate “App Actions” feature for Android apps, but it never got much developer traction, and it’s tied to Google Assistant, which itself seems to be on the way out. The whole situation’s a mess.
Amazon, meanwhile, is backing away from its third-party Alexa Skills ecosystem, killing off free Amazon Web Services credits for developers and rewards for top-performing skills. The company is in the midst of overhauling Alexa around generative AI, and is rebooting its developer tools accordingly.
Even OpenAI has floundered in its efforts to build a third-party ecosystem around ChatGPT. The company scrapped its first attempt at a plug-in system, which never worked well to begin with, and it’s unclear if the new “GPTs” system will fare any better.
As Matthew Cassinelli points out, those companies still have to figure out how developers can monetize virtual assistant support. That’s not a problem for Apple, whose developers are simply building out Siri support atop their existing apps.
“Apple’s ability just to let you make money on the App Store alone, it’s very powerful compared to something like Google Assistant or Alexa, where they’d have to start a whole business just for that sort of thing,” he says.
Doist’s Salihefendic has been burned by virtual assistant dead-ends before. The company killed off Todoist’s Google Assistant integration two years ago due to Google SDK changes, with Salihefendic noting that the platform “wasn’t very smart” to begin with. It’s also shutting down Alexa support on July 1, as Amazon is deprecating the APIs that Todoist relied on. Salihefendic says that platform “was kind of a failed attempt.”
“Siri has also been a joke,” Salihefendic says, and while he’s not fully convinced the overhaul won’t be Alexa all over again, he’s also excited about what Apple Intelligence might enable. If Apple gives developers the tools they need to build powerful voice controls into their apps, Siri might not be a punchline anymore.