To celebrate Global Accessibility Awareness Day, Apple announced a handful of forthcoming accessibility features for iPhone, iPad, and Mac. One of them really caught my eye because I think it gives us a tiny glimpse at what Apple’s platforms might look like in the future:

Coming later this year… those at risk of losing their ability to speak can use Personal Voice to create a synthesized voice that sounds like them for connecting with family and friends.

With Live Speech on iPhone, iPad, and Mac, users can type what they want to say to have it be spoken out loud during phone and FaceTime calls as well as in-person conversations… For users at risk of losing their ability to speak — such as those with a recent diagnosis of ALS… Personal Voice is a simple and secure way to create a voice that sounds like them.

Users can create a Personal Voice by reading along with a randomized set of text prompts to record 15 minutes of audio on iPhone or iPad. This speech accessibility feature uses on-device machine learning to keep users’ information private and secure, and integrates seamlessly with Live Speech so users can speak with their Personal Voice when connecting with loved ones.
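The type-to-speak half of this maps closely onto text-to-speech APIs that already ship on Apple’s platforms. As a rough, hypothetical sketch (assuming a trained Personal Voice would eventually surface through the existing AVSpeechSynthesisVoice machinery, which the announcement does not confirm), speaking typed text with an on-device voice could look something like this:

```swift
import AVFoundation

/// Hypothetical sketch: speak typed text with an on-device system voice,
/// roughly how a Live Speech–style flow might work. Whether Personal Voice
/// will be exposed through AVSpeechSynthesisVoice is an assumption, not
/// something Apple has confirmed.
final class TypedSpeechController {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)

        // Use the default en-US system voice; a shipped Personal Voice would
        // presumably appear alongside the other installed voices.
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceDefaultSpeechRate

        synthesizer.speak(utterance)
    }
}
```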

Apple’s custom silicon expertise currently gives them a huge advantage when it comes to training and running personalized machine learning models locally on customers’ devices.
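The building blocks for this are already public, even if Apple’s own Personal Voice training pipeline is not. Core ML has supported on-device model updates via MLUpdateTask since iOS 13, and a minimal sketch of that kind of local personalization loop looks like the following (the model file name and training batch here are placeholders, not anything Apple has shipped):

```swift
import CoreML

/// Minimal sketch of on-device personalization with Core ML's MLUpdateTask.
/// "PersonalizedModel.mlmodelc" and the training batch are hypothetical; the
/// point is that the update runs entirely on the device, so the user's data
/// never needs to leave it.
func personalizeModel(at modelURL: URL, with trainingData: MLBatchProvider) throws {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // let Core ML use the Neural Engine where possible

    let task = try MLUpdateTask(
        forModelAt: modelURL,
        trainingData: trainingData,
        configuration: config,
        completionHandler: { context in
            // context.model is the updated model; write it back for later use.
            let updatedURL = modelURL.deletingLastPathComponent()
                .appendingPathComponent("PersonalizedModel.mlmodelc")
            try? context.model.write(to: updatedURL)
        }
    )
    task.resume()
}
```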

I would love to see a “Siri 2.0” that utilizes on-device language models. However, as we get closer to WWDC, it has become increasingly clear that this is not the year for that. This year will almost certainly be dominated by the unveiling of Apple’s rumored XR headset. But even setting the headset aside, Apple tends to be slow and methodical when it comes to changes as large as a major Siri overhaul would require.

Nevertheless, I expect to see more incremental progress toward expanding on-device AI models across all of Apple’s platforms. We just might have to wait until iOS 18 or 19 for the big stuff.