
Apple patent application points to voice recognition/voice command technology in future versions of iOS

You’ve gotta love forthcoming versions of iOS.

Per freepatentsonline, future iPhone software could use the sound of someone’s voice to identify who is speaking, allowing the system to apply custom-tailored settings and grant access to personal content.

The concept was revealed this week in a new patent application published by the U.S. Patent and Trademark Office. Entitled “User Profiling for Voice Input Processing,” it describes a system that would identify individual users when they speak aloud.

Apple’s application notes that voice control already exists in some forms on a number of portable devices. These systems are accompanied by word libraries, which offer a range of options for users to speak aloud and interact with the device.

But these libraries can grow so large that processing voice inputs becomes prohibitive. In particular, long voice inputs can be time-consuming for users and resource-intensive for the device.

Apple proposes to resolve these issues with a system that would identify users by the sound of their voice and match instructions to that user’s identity. By recognizing who is using a device, an iPhone could let that user navigate hands-free and accomplish tasks more efficiently.
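
The application doesn’t describe an implementation, but the basic idea of narrowing a word library to the identified speaker can be sketched roughly as follows. Everything in this Swift snippet (the types, the voiceprint lookup, and the sample library) is invented for illustration and is not drawn from the filing:

```swift
// Hypothetical sketch: once the speaker is identified from their voice,
// only that user's (much smaller) word library is searched.
struct UserProfile {
    let name: String
    let wordLibrary: Set<String>   // contact names, playlists, app names, etc.
}

struct VoiceCommandSystem {
    let profiles: [String: UserProfile]   // keyed by a voiceprint identifier

    // Stand-in for the acoustic matching the patent describes; a real
    // system would compare audio features, not a plain string key.
    func identifySpeaker(voiceprintID: String) -> UserProfile? {
        profiles[voiceprintID]
    }

    // Restrict recognition candidates to words the identified user actually has,
    // shrinking the search space the application says can become prohibitive.
    func candidateWords(in spokenWords: [String], for speaker: UserProfile) -> [String] {
        spokenWords.filter { speaker.wordLibrary.contains($0.lowercased()) }
    }
}

let alice = UserProfile(name: "Alice", wordLibrary: ["call", "john", "party mix"])
let system = VoiceCommandSystem(profiles: ["voiceprint-alice": alice])

if let speaker = system.identifySpeaker(voiceprintID: "voiceprint-alice") {
    print(system.candidateWords(in: ["Call", "John"], for: speaker))   // ["Call", "John"]
}
```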

The application includes examples of highly specific voice commands that a complex system might be able to interpret. The spoken command “call John’s cell phone,” for example, includes the keyword “call” as well as the variables “John” and “cell phone.”
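
That keyword-plus-variables structure is easy to picture in code. Here is a loose Swift sketch, assuming a small hand-picked keyword list and whitespace-split input, neither of which comes from the application itself:

```swift
// Hypothetical sketch of splitting a spoken command into a keyword
// ("call", "play", ...) and the variables that follow it.
struct ParsedCommand {
    let keyword: String
    let variables: [String]
}

let knownKeywords: Set<String> = ["call", "play", "find", "search"]

func parse(_ utterance: String) -> ParsedCommand? {
    let words = utterance.lowercased().split(separator: " ").map(String.init)
    guard let first = words.first, knownKeywords.contains(first) else { return nil }
    return ParsedCommand(keyword: first, variables: Array(words.dropFirst()))
}

// "call John's cell phone" -> keyword "call", variables ["john's", "cell", "phone"]
if let command = parse("call John's cell phone") {
    print(command.keyword, command.variables)
}
```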

In a more detailed example, a lengthy command is cited as a possibility: “Find my most played song with a 4-star rating and create a Genius playlist using it as a seed.” Also included is natural language voice input, with the command: “Pick a good song to add to a party mix.”

“The voice input provided to the electronic device can therefore be complex, and require significant processing to first identify the individual words of input before extracting an instruction from the input and executing a corresponding device operation,” the application reads.

To simplify this, an iPhone would maintain words that relate specifically to the device’s current user. For example, certain media or contacts could be tied to a particular user, allowing two individuals to share an iPhone or iPad with distinct personal settings and content.

In recognizing a user’s voice, the system could also become dynamically tailored to their needs and interests. In one example, a user’s musical preferences would be tracked, and simply asking the system aloud to recommend a song would identify the user and their interests.
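
Again, the filing stays at the level of ideas, but a toy version of that preference tracking could look like the sketch below. The play-count bookkeeping and the “most played wins” recommendation rule are assumptions made purely for illustration:

```swift
// Hypothetical sketch: track how often a user plays each song, then answer
// "pick a good song" with that user's most-played title.
struct ListeningHistory {
    private(set) var playCounts: [String: Int] = [:]

    mutating func recordPlay(of song: String) {
        playCounts[song, default: 0] += 1
    }

    // Crude stand-in for "recommend a song": simply the most-played track.
    func recommendation() -> String? {
        playCounts.max { $0.value < $1.value }?.key
    }
}

var history = ListeningHistory()
history.recordPlay(of: "Song A")
history.recordPlay(of: "Song B")
history.recordPlay(of: "Song B")
print(history.recommendation() ?? "no data")   // "Song B"
```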

The proposed invention made public this week was first filed in February of 2010. It is credited to Allen P. Haughay.

One reply on “Apple patent application points to voice recognition/voice command technology in future versions of iOS”

Apple, which recently filed a patent application for a technology to keep screens on mobile devices free of fingerprints, is upping the ante by filing for a new application that could keep your fingers from even touching the screen in the first place.
The application is for what Apple calls “User Profiling for Voice Input Processing,” which it describes as being able to identify your voice and understand complex commands. Need to make a playlist? No problem, just ask. Need to call your friend? Just say so. The patent application says all these commands are possible: play, call, and search. According to the application, it would allow the user to “find my most played song with a 4-star rating and create a Genius playlist using it as a seed.”
