Modern TTS typically works in two stages: a text-to-spectrogram model (converting text to a time-frequency representation of the audio, typically a mel spectrogram) and a vocoder (converting the spectrogram into an actual audio waveform). Some newer approaches are end-to-end, directly generating audio tokens from text with Transformer-based architectures similar to LLMs, operating on audio tokens instead of text tokens.
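The two-stage structure can be sketched with stand-in functions. Everything here is a toy assumption (the frame rate, mel bin count, and hop length are illustrative defaults, and the "models" are deterministic stubs standing in for neural networks), but the data flow — text in, spectrogram frames in the middle, waveform samples out — mirrors the real pipeline:

```python
import math

# Hypothetical sketch of the two-stage TTS pipeline.
# Both "models" are stubs: real systems use neural networks
# (an acoustic model for stage 1, a neural vocoder for stage 2).

N_MELS = 80          # common number of mel frequency bins
FRAMES_PER_CHAR = 5  # toy assumption: 5 spectrogram frames per character

def text_to_spectrogram(text: str) -> list[list[float]]:
    """Stage 1: map text to mel spectrogram frames (frames x mel bins).

    A real acoustic model predicts these frames with a neural net;
    here we emit a deterministic dummy pattern of the right shape.
    """
    n_frames = len(text) * FRAMES_PER_CHAR
    return [[math.sin(f * 0.1 + m * 0.01) for m in range(N_MELS)]
            for f in range(n_frames)]

def vocoder(spectrogram: list[list[float]], hop_length: int = 256) -> list[float]:
    """Stage 2: turn spectrogram frames into waveform samples.

    A real vocoder upsamples each frame into hop_length audio samples;
    this stub just repeats each frame's mean energy.
    """
    waveform = []
    for frame in spectrogram:
        energy = sum(frame) / len(frame)
        waveform.extend([energy] * hop_length)
    return waveform

spec = text_to_spectrogram("hello")   # 5 chars -> 25 frames
audio = vocoder(spec)                 # 25 frames * 256 samples = 6400 samples
```

The key design point the sketch illustrates is the clean interface between the stages: the spectrogram is the contract, so acoustic models and vocoders can be swapped independently.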
Voice cloning creates a synthetic version of a specific person's voice from a short audio sample (sometimes as little as 15 seconds). This enables personalization, dubbing, and preserving the voices of people who have lost the ability to speak. It also creates obvious risks: impersonation, fraud, and non-consensual voice replication. Most providers implement consent verification and watermarking to mitigate misuse.
For conversational AI, TTS latency matters as much as quality. A user asking a voice assistant a question expects a response within 1–2 seconds. Generating the entire response before playback can take far longer, so streaming TTS (generating and playing audio in chunks as the LLM produces text) is essential. The full pipeline (STT + LLM + TTS) must stay under ~2 seconds end to end for natural conversation, which constrains model sizes and infrastructure choices.
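The chunking step at the heart of streaming TTS can be sketched as follows. This is an illustrative sketch, not any vendor's API: the function names and the sentence-boundary heuristic are assumptions, and production systems use more robust segmentation. The idea is to cut the LLM's token stream at sentence boundaries and hand each chunk to TTS immediately, so playback starts while the LLM is still generating:

```python
import re
from typing import Iterator

# Hypothetical streaming-TTS chunker. Instead of waiting for the full
# LLM response, cut the token stream at sentence boundaries so each
# sentence can be synthesized and played as soon as it completes.

SENTENCE_END = re.compile(r"[.!?]\s*$")  # naive boundary heuristic

def stream_tts_chunks(token_stream: Iterator[str]) -> Iterator[str]:
    """Yield sentence-sized text chunks ready for TTS synthesis."""
    buffer = ""
    for token in token_stream:
        buffer += token
        if SENTENCE_END.search(buffer):
            yield buffer.strip()  # hand this chunk to TTS now
            buffer = ""
    if buffer.strip():            # flush any trailing partial sentence
        yield buffer.strip()

# Simulated LLM token stream:
tokens = ["The ", "weather ", "is ", "sunny. ", "Highs ", "near ", "20C."]
chunks = list(stream_tts_chunks(tokens))
# chunks == ["The weather is sunny.", "Highs near 20C."]
```

Because the first chunk is ready after the first sentence rather than the full response, the time-to-first-audio drops to roughly (STT + first-sentence LLM latency + first-chunk TTS latency), which is what makes the ~2-second budget achievable.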