
SDLTTSChunk Class Reference


Overview

Specifies what is to be spoken. This can be a simple text phrase, which SDL will speak according to its own rules; a phoneme specification from either the Microsoft SAPI phoneme set or the LHPLUS phoneme set; or a pre-recorded sound in WAV format (either developer-defined or provided by the SDL platform).

In SDL, words, and therefore sentences, can be built up from phonemes, which explicitly tell the TTS engine how to pronounce them. For example, to have SDL pronounce the word “read” as “red” rather than “reed”, the developer would spell out the desired pronunciation in phonemes.

For more information about phonemes, see http://en.wikipedia.org/wiki/Phoneme.

@since SmartDeviceLink 1.0
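
For illustration, a plain-text chunk and a phoneme chunk could be built as sketched below. The Swift cases .text and .sapiPhonemes refer to SDLSpeechCapabilities values, and the phoneme string is only illustrative; check your SDL version and phoneme set documentation for exact spellings.

import SmartDeviceLink

// Spoken according to SDL's own text-to-speech rules
let plainChunk = SDLTTSChunk(text: "Please read the manual", type: .text)

// An illustrative SAPI phoneme spelling forcing the pronunciation "red"
let phonemeChunk = SDLTTSChunk(text: "r eh d", type: .sapiPhonemes)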

-initWithText:type:

Initialize with text and a type

Objective-C

- (nonnull instancetype)initWithText:(nonnull NSString *)text
                                type:(nonnull SDLSpeechCapabilities)type;

Swift

init(text: String, type: SDLSpeechCapabilities)

Parameters

text

The string to be spoken

type

The SDLSpeechCapabilities value describing how the string should be interpreted (e.g. plain text or a phoneme specification)

Return Value

The initialized SDLTTSChunk
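
A minimal usage sketch, assuming the .text case of SDLSpeechCapabilities:

import SmartDeviceLink

let chunk = SDLTTSChunk(text: "Your destination is on the right", type: .text)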

+textChunksFromString:

Create TTS using text

Objective-C

+ (nonnull NSArray<SDLTTSChunk *> *)textChunksFromString:
    (nonnull NSString *)string;

Swift

class func textChunks(from string: String) -> [SDLTTSChunk]

Parameters

string

The string to be spoken as plain text

Return Value

An array of text-type SDLTTSChunks built from the string
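
A usage sketch, assuming an already-started sdlManager and that SDLSpeak can be initialized with a ttsChunks array (see the Speak RPC reference for your SDL version):

import SmartDeviceLink

let chunks = SDLTTSChunk.textChunks(from: "Hello from my app")
let speak = SDLSpeak(ttsChunks: chunks)
sdlManager.send(request: speak) { _, response, error in
    // Inspect response?.success and error to confirm the prompt was spoken
}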

+sapiChunksFromString:

Create TTS using SAPI

Objective-C

+ (nonnull NSArray<SDLTTSChunk *> *)sapiChunksFromString:
    (nonnull NSString *)string;

Swift

class func sapiChunks(from string: String) -> [SDLTTSChunk]

Parameters

string

The SAPI phoneme string

Return Value

An array of SAPI-phoneme-type SDLTTSChunks built from the string
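
A sketch, assuming an already-started sdlManager and the SDLSpeak(ttsChunks:) initializer; the phoneme string is only an illustrative SAPI spelling of “red” and should be replaced with a valid specification for your TTS engine:

import SmartDeviceLink

let sapiChunks = SDLTTSChunk.sapiChunks(from: "r eh d")
let speak = SDLSpeak(ttsChunks: sapiChunks)
sdlManager.send(speak)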

+lhPlusChunksFromString:

Create TTS using LH Plus

Objective-C

+ (nonnull NSArray<SDLTTSChunk *> *)lhPlusChunksFromString:
    (nonnull NSString *)string;

Swift

class func lhPlusChunks(from string: String) -> [SDLTTSChunk]

Parameters

string

The LHPLUS phoneme string

Return Value

An array of LHPLUS-phoneme-type SDLTTSChunks built from the string
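
A sketch along the same lines; the placeholder must be replaced with a valid LHPLUS phoneme specification:

import SmartDeviceLink

// Replace the placeholder with a real LHPLUS phoneme specification
let lhChunks = SDLTTSChunk.lhPlusChunks(from: "<LHPLUS phoneme spec>")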

+prerecordedChunksFromString:

Create TTS using prerecorded chunks

Objective-C

+ (nonnull NSArray<SDLTTSChunk *> *)prerecordedChunksFromString:
    (nonnull NSString *)string;

Swift

class func prerecordedChunks(from string: String) -> [SDLTTSChunk]

Parameters

string

The name of the pre-recorded sound to play

Return Value

An array of pre-recorded-type SDLTTSChunks built from the string
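
A sketch, assuming the head unit exposes a pre-recorded sound under the name used here; the available names are platform-defined (see SDLPrerecordedSpeech):

import SmartDeviceLink

// The string must match a pre-recorded sound provided by the SDL platform
let chunks = SDLTTSChunk.prerecordedChunks(from: "HELP_JINGLE")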

+silenceChunks

Create TTS using silence

Objective-C

+ (nonnull NSArray<SDLTTSChunk *> *)silenceChunks;

Swift

class func silenceChunks() -> [SDLTTSChunk]

Return Value

An array containing a silence SDLTTSChunk
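
Silence chunks are typically mixed with other chunks to insert a pause; a sketch assuming the SDLSpeak(ttsChunks:) initializer:

import SmartDeviceLink

let prompt = SDLTTSChunk.textChunks(from: "Turn left") +
    SDLTTSChunk.silenceChunks() +
    SDLTTSChunk.textChunks(from: "then continue straight")
let speak = SDLSpeak(ttsChunks: prompt)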

+fileChunksWithName:

Create “TTS” to play an audio file previously uploaded to the system.

Objective-C

+ (nonnull NSArray<SDLTTSChunk *> *)fileChunksWithName:
    (nonnull NSString *)fileName;

Swift

class func fileChunks(withName fileName: String) -> [SDLTTSChunk]

Parameters

fileName

The name under which the file was uploaded via SDLFile or the PutFile RPC

Return Value

An array of file-type SDLTTSChunks referencing the uploaded audio file
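
A sketch of uploading an audio file and then referencing it by name. The sdlManager instance, the local file path, and the name "Chime" are hypothetical; it also assumes SDLFile(fileURL:name:persistent:) and the file manager's upload(file:completionHandler:) method:

import Foundation
import SmartDeviceLink

let chimeURL = URL(fileURLWithPath: "/path/to/chime.wav")   // hypothetical local file
let audioFile = SDLFile(fileURL: chimeURL, name: "Chime", persistent: true)
sdlManager.fileManager.upload(file: audioFile) { success, _, _ in
    guard success else { return }
    let speak = SDLSpeak(ttsChunks: SDLTTSChunk.fileChunks(withName: "Chime"))
    sdlManager.send(speak)
}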

text

Text to be spoken, a phoneme specification, or the name of a pre-recorded / pre-uploaded sound. The contents of this field are indicated by the “type” field.

Required, max length 500 characters

Objective-C

@property (nonatomic, strong) NSString *_Nonnull text;

Swift

var text: String { get set }

type

The type of information in the “text” field (e.g. phrase to be spoken, phoneme specification, name of pre-recorded sound).

Required

Objective-C

@property (nonatomic, strong) SDLSpeechCapabilities _Nonnull type;

Swift

var type: SDLSpeechCapabilities { get set }
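
A brief sketch of reading the two fields back from a chunk:

import SmartDeviceLink

let chunks = SDLTTSChunk.textChunks(from: "Hello")
for chunk in chunks {
    print("type: \(chunk.type), text: \(chunk.text)")
}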