The Web Speech API enables you to incorporate voice data into web apps. It has two parts: SpeechSynthesis (Text-to-Speech) and SpeechRecognition (Asynchronous Speech Recognition).
There are two components to this API:

- Speech recognition is accessed via the SpeechRecognition interface, which provides the ability to recognize voice context from an audio input (normally via the device's default speech recognition service) and respond appropriately. Generally you'll use the interface's constructor to create a new SpeechRecognition object, which has a number of event handlers available for detecting when speech is input through the device's microphone (a minimal sketch appears after this list). The SpeechGrammar interface represents a container for a particular set of grammar that your app should recognize. Grammar is defined using JSpeech Grammar Format (JSGF).
- Speech synthesis is accessed via the SpeechSynthesis interface, a text-to-speech component that allows programs to read out their text content (normally via the device's default speech synthesizer). Different voice types are represented by SpeechSynthesisVoice objects, and different parts of text that you want to be spoken are represented by SpeechSynthesisUtterance objects. You can get these spoken by passing them to the SpeechSynthesis.speak() method (a synthesis sketch follows the interface list below).

For more details on using these features, see Using the Web Speech API.
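As a rough sketch of the recognition flow just described, assuming the prefixed-constructor fallback needed in Chromium-based browsers; the color grammar is purely illustrative:

```js
// Chromium-based browsers expose these constructors with a vendor prefix.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;
const SpeechGrammarList =
  window.SpeechGrammarList || window.webkitSpeechGrammarList;

const recognition = new SpeechRecognition();
recognition.lang = "en-US";
recognition.interimResults = false;
recognition.maxAlternatives = 1;

// A JSGF grammar hinting at the words we expect to hear (illustrative).
const grammar =
  "#JSGF V1.0; grammar colors; public <color> = red | green | blue;";
const grammarList = new SpeechGrammarList();
grammarList.addFromString(grammar, 1);
recognition.grammars = grammarList;

recognition.onresult = (event) => {
  // Each SpeechRecognitionResult holds one or more
  // SpeechRecognitionAlternative objects; the first is the best guess.
  const transcript = event.results[0][0].transcript;
  console.log(`Heard: ${transcript}`);
};

// Start listening through the device's microphone (prompts for permission).
recognition.start();
```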
SpeechRecognition
: The controller interface for the recognition service; this also handles the SpeechRecognitionEvent sent from the recognition service.

SpeechRecognitionAlternative
: Represents a single word that has been recognized by the speech recognition service.

SpeechRecognitionErrorEvent
: Represents error messages from the recognition service.

SpeechRecognitionEvent
: The event object for the result and nomatch events; it contains all the data associated with an interim or final speech recognition result.

SpeechGrammar
: The words or patterns of words that we want the recognition service to recognize.

SpeechGrammarList
: Represents a list of SpeechGrammar objects.

SpeechRecognitionResult
: Represents a single recognition match, which may contain multiple SpeechRecognitionAlternative objects.

SpeechRecognitionResultList
: Represents a list of SpeechRecognitionResult objects, or a single one if results are being captured in continuous mode.

SpeechSynthesis
: The controller interface for the speech service; this can be used to retrieve information about the synthesis voices available on the device, start and pause speech, and other commands besides.

SpeechSynthesisErrorEvent
: Contains information about any errors that occur while processing SpeechSynthesisUtterance objects in the speech service.

SpeechSynthesisEvent
: Contains information about the current state of SpeechSynthesisUtterance objects that have been processed in the speech service.

SpeechSynthesisUtterance
: Represents a speech request. It contains the content the speech service should read and information about how to read it (e.g. language, pitch, and volume).

SpeechSynthesisVoice
: Represents a voice that the system supports. Every SpeechSynthesisVoice has its own relative speech service, including information about language, name, and URI.

Window.speechSynthesis
: Specified as part of a [NoInterfaceObject] interface called SpeechSynthesisGetter, and implemented by the Window object, the speechSynthesis property provides access to the SpeechSynthesis controller, and is therefore the entry point to speech synthesis functionality.
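A minimal sketch of the synthesis entry point described above; the utterance text and English-voice selection are illustrative choices, not part of the API:

```js
const utterance = new SpeechSynthesisUtterance("Hello from the Web Speech API.");
utterance.pitch = 1; // 0 to 2
utterance.rate = 1; // 0.1 to 10
utterance.volume = 1; // 0 to 1

function speakWhenVoicesReady() {
  // Pick any English voice if one is available; otherwise the default is used.
  const voice = speechSynthesis
    .getVoices()
    .find((v) => v.lang.startsWith("en"));
  if (voice) utterance.voice = voice;
  speechSynthesis.speak(utterance);
}

// getVoices() can return an empty list before the voice list has loaded,
// in which case the voiceschanged event fires once it is populated.
if (speechSynthesis.getVoices().length > 0) {
  speakWhenVoicesReady();
} else {
  speechSynthesis.addEventListener("voiceschanged", speakWhenVoicesReady, {
    once: true,
  });
}
```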
For information on errors reported by the Speech API (for example, "language-not-supported" and "language-unavailable"), see the documentation for the error property of SpeechRecognitionErrorEvent and SpeechSynthesisErrorEvent.
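Continuing the sketches above, a minimal sketch of where these errors surface:

```js
// SpeechRecognitionErrorEvent carries an error code and a human-readable message.
recognition.onerror = (event) => {
  console.error(`Recognition error: ${event.error} (${event.message})`);
};

// SpeechSynthesisErrorEvent carries an error code describing why the
// utterance could not be spoken.
utterance.onerror = (event) => {
  console.error(`Synthesis error: ${event.error}`);
};
```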
The Web Speech API examples on GitHub contain demos illustrating speech recognition and synthesis.
Browser compatibility for SpeechSynthesis:

| Feature | Chrome | Edge | Firefox | Internet Explorer | Opera | Safari | WebView Android | Chrome Android | Firefox for Android | Opera Android | Safari on iOS | Samsung Internet |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Web_Speech_API | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| cancel | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| getVoices | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| pause | 33 | 14 | 49 | No | 21 | 7 | No | 33* | 62* | No | 7 | 3.0* |
| paused | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| pending | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| resume | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| speak | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| speaking | 33 | 14 | 49 | No | 21 | 7 | No | 33 | 62 | No | 7 | 3.0 |
| voiceschanged_event | 33 | 14 | 49 | No | 21 | 16 | No | 33 | 62 | No | 16 | 3.0 |

\* In Android, pause() ends the current utterance; pause() behaves the same as cancel().
Browser compatibility for SpeechRecognition:

| Feature | Chrome | Edge | Firefox | Internet Explorer | Opera | Safari | WebView Android | Chrome Android | Firefox for Android | Opera Android | Safari on iOS | Samsung Internet |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SpeechRecognition | 33 | 79 | No | No | No | 14.1 | 37 | 33 | No | No | 14.5 | 2.0 |
| Web_Speech_API | 33* | 79* | No | No | 20* | 14.1 | 4.4.3* | 33* | No | 20* | 14.5 | 2.0* |
| abort | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| audioend_event | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| audiostart_event | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| continuous | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| end_event | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| error_event | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| grammars | 33 | 79 | No | No | 20 | No | 4.4.3 | 33 | No | 20 | No | 2.0 |
| interimResults | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| lang | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| maxAlternatives | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| nomatch_event | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| result_event | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| soundend_event | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| soundstart_event | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| speechend_event | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| speechstart_event | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| start | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| start_event | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |
| stop | 33 | 79 | No | No | 20 | 14.1 | 4.4.3 | 33 | No | 20 | 14.5 | 2.0 |

\* You'll need to serve your code through a web server for recognition to work.
© 2005–2023 MDN contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later.
https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_API