During the announcement, Apple introduced a range of new accessibility features, including 'Personal Voice' for people at risk of losing their ability to speak. The tech giant describes it as a simple and secure way for iPhone and iPad users to create a synthetic voice that sounds like their own from just 15 minutes of recorded audio.
Apple further shared that users can create a Personal Voice by reading a randomised set of text prompts aloud to record 15 minutes of audio on an iPhone or iPad. “At the end of the day, the most important thing is being able to communicate with friends and family,” said Philip Green, board member and ALS advocate at the Team Gleason nonprofit.
“If you can tell them you love them, in a voice that sounds like you, it makes all the difference in the world — and being able to create your synthetic voice on your iPhone in just 15 minutes is extraordinary,” he added.
This upcoming speech feature is an accessibility tool that uses on-device machine learning to keep users’ information private and secure, and integrates seamlessly with Live Speech so users can speak with their Personal Voice when connecting with loved ones.
Apple is also introducing a Point and Speak feature in Detection Mode in Magnifier for users who are blind or have low vision. “Point and Speak in Magnifier makes it easier for users with vision disabilities to interact with physical objects that have several text labels. For example, while using a household appliance — such as a microwave — Point and Speak combines input from the camera, the LiDAR Scanner, and on-device machine learning to announce the text on each button as users move their finger across the keypad,” Apple said.