Since the user may be driving while interacting with your SDL app, speech phrases can provide important feedback. At any time during your app's lifecycle you can send a speech phrase using the Speak request, and the head unit's text-to-speech (TTS) engine will produce synthesized speech from the text you provide.
When using the Speak RPC, you will receive a response from the head unit once the operation has completed. The response tells you whether the speech was completed, interrupted, rejected, or aborted. Keep in mind that a speech request can interrupt another ongoing speech request; if you want to chain speech requests, you must wait for the current request to finish before sending the next one.
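That chaining rule can be sketched as a small helper. `speakInOrder` and its `sendSpeak` parameter are hypothetical names introduced for illustration; with the real library, `sendSpeak` would build a Speak RPC from the phrase and return the promise from `sdlManager.sendRpcResolve(speak)`.

```javascript
// Sketch: serialize speech requests so each one finishes before the
// next begins, preventing a later Speak from interrupting an earlier one.
// `sendSpeak` is a placeholder for your own function that sends a Speak
// RPC and resolves with its response.
async function speakInOrder(sendSpeak, phrases) {
    const results = [];
    for (const phrase of phrases) {
        // Await each response before sending the next request
        results.push(await sendSpeak(phrase));
    }
    return results;
}
```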
On Manticore, spoken feedback works best in Google Chrome, Mozilla Firefox, or Microsoft Edge. Spoken feedback does not work in Apple Safari at this time.
The speech request you send can simply be a text phrase, which will be played back in accordance with the user's current language settings, or it can consist of phoneme specifications to direct SDL’s TTS engine to speak a language-independent, speech-sculpted phrase. It is also possible to play a pre-recorded sound file (such as an MP3) using the speech request. For more information on how to play a sound file please refer to Playing Audio Indications.
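As a rough illustration of the file option, here is a minimal sketch of a FILE-type chunk. The plain-object shape, helper name, and file name are assumptions for illustration only; with the real library you would use `SDL.rpc.structs.TTSChunk` with `SDL.rpc.enums.SpeechCapabilities.FILE`, and the file must already have been uploaded to the module (see Playing Audio Indications).

```javascript
// Hedged sketch: a FILE-type speech chunk referencing a sound file
// previously uploaded to the module. The object shape stands in for a
// real SDL.rpc.structs.TTSChunk.
function buildFileChunk(uploadedFileName) {
    // For FILE chunks, the text field carries the uploaded file's name
    return { text: uploadedFileName, type: 'FILE' };
}
```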
Once you have successfully connected to the module, you can access the supported speech capability properties on the sdlManager.getSystemCapabilityManager() instance.
```javascript
// This is technically a private property and a `getSpeechCapabilities` method will be added to retrieve it in a future release.
let speechCapabilities = sdlManager.getSystemCapabilityManager()._speechCapabilities;
```
Below is a list of commonly supported speech capabilities.
Speech Capability | Description |
---|---|
Text | Text phrases |
SAPI Phonemes | Microsoft speech synthesis API |
File | A pre-recorded sound file |
Once you know what speech capabilities are supported by the module, you can create the speak requests.
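For example, you might pick the richest supported chunk type before building a request. This helper is a sketch: the string capability names are illustrative, and with the real library you would compare against `SDL.rpc.enums.SpeechCapabilities` values from the array retrieved above.

```javascript
// Sketch: choose the richest chunk type the module supports before
// building a Speak request. String names stand in for
// SDL.rpc.enums.SpeechCapabilities entries.
function pickSpeechType(speechCapabilities) {
    // Prefer SAPI phonemes for speech-sculpted output when available
    if (speechCapabilities && speechCapabilities.includes('SAPI_PHONEMES')) {
        return 'SAPI_PHONEMES';
    }
    // Plain text is the common fallback
    return 'TEXT';
}
```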
```javascript
// Text phrase
const chunk = new SDL.rpc.structs.TTSChunk()
    .setText('hello')
    .setType(SDL.rpc.enums.SpeechCapabilities.SC_TEXT);
const speak = new SDL.rpc.messages.Speak().setTtsChunks([chunk]);
```
```javascript
// SAPI phonemes
const chunk = new SDL.rpc.structs.TTSChunk()
    .setText('h eh - l ow 1')
    .setType(SDL.rpc.enums.SpeechCapabilities.SAPI_PHONEMES);
const speak = new SDL.rpc.messages.Speak().setTtsChunks([chunk]);
```
```javascript
// sdl_javascript_suite v1.1+
const response = await sdlManager.sendRpcResolve(speak);
if (!response.getSuccess()) {
    switch (response.getResultCode()) {
        case SDL.rpc.enums.Result.DISALLOWED:
            console.log('The app does not have permission to use the speech request');
            break;
        case SDL.rpc.enums.Result.REJECTED:
            console.log('The request was rejected because a higher priority request is in progress');
            break;
        case SDL.rpc.enums.Result.ABORTED:
            console.log('The request was aborted by another higher priority request');
            break;
        default:
            console.log('Some other error occurred');
    }
} else {
    console.log('Speech was successfully spoken');
}
// thrown exceptions should be caught by a parent function via .catch()
```

```javascript
// Pre sdl_javascript_suite v1.1
const response = await sdlManager.sendRpc(speak);
if (!response.getSuccess()) {
    switch (response.getResultCode()) {
        case SDL.rpc.enums.Result.DISALLOWED:
            console.log('The app does not have permission to use the speech request');
            break;
        case SDL.rpc.enums.Result.REJECTED:
            console.log('The request was rejected because a higher priority request is in progress');
            break;
        case SDL.rpc.enums.Result.ABORTED:
            console.log('The request was aborted by another higher priority request');
            break;
        default:
            console.log('Some other error occurred');
    }
} else {
    console.log('Speech was successfully spoken');
}
```