Modern voice cloning takes two main approaches: TTS fine-tuning (adapting a text-to-speech model on audio of the target voice) and zero-shot cloning (feeding a short voice sample as a reference to a general model, which extracts and applies the voice's characteristics at inference time). Zero-shot is more convenient, since no training is needed, but fine-tuning produces higher fidelity at the cost of more reference audio and more compute. ElevenLabs and most consumer services use zero-shot approaches.
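The structural difference between the two workflows can be shown with a deliberately simplified toy: zero-shot computes a speaker embedding from the reference at inference time and conditions a frozen model on it, while fine-tuning updates the model's weights from the reference before synthesis. Every name below (`ToyVoiceModel`, the one-number "embedding") is a made-up illustration of the control flow, not any real cloning system's API.

```python
class ToyVoiceModel:
    """Stand-in for a TTS model: a 'weight' vector plus optional conditioning."""
    def __init__(self):
        self.weights = [0.0, 0.0]  # frozen base-model parameters

    def synthesize(self, text, speaker_embedding=None):
        # A real model maps text -> audio; here we just record what conditioned it.
        return {"text": text,
                "weights": list(self.weights),
                "conditioning": speaker_embedding}


def extract_embedding(reference_audio):
    """Toy speaker encoder: summarize the reference as its mean sample value."""
    return sum(reference_audio) / len(reference_audio)


def zero_shot_clone(model, reference_audio, text):
    # No training: embed the reference, condition the frozen model on it.
    return model.synthesize(text, speaker_embedding=extract_embedding(reference_audio))


def fine_tune_clone(model, reference_audio, text, steps=3, lr=0.1):
    # 'Training': nudge the model's own weights toward the reference statistics.
    target = extract_embedding(reference_audio)
    for _ in range(steps):
        model.weights = [w + lr * (target - w) for w in model.weights]
    return model.synthesize(text)
```

The point of the toy is structural: `zero_shot_clone` leaves `model.weights` untouched and does all speaker-specific work through the conditioning input, whereas `fine_tune_clone` has to modify the model itself, which is why it needs more data and compute per voice.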
Clone quality depends on four factors: the audio quality of the reference sample (clean, noise-free audio produces much better clones); the amount of reference audio (more is better, with diminishing returns after roughly a minute); the diversity of the speech (samples with varied intonation and emotion clone better than a monotone reading); and the capability of the cloning model itself. The best current systems are nearly indistinguishable from real speech within the reference speaker's typical speaking style, but may falter on emotions or styles not represented in the reference.
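Some of these factors can be screened automatically before cloning. The sketch below, using only the Python standard library, computes rough heuristics for a mono 16-bit PCM reference: duration, clipping, and a crude SNR estimate that compares loud (speech-like) frames against quiet (noise-floor) frames. The 20 ms frame size and the percentile-based SNR are illustrative assumptions, not a standard metric.

```python
import io
import math
import struct
import wave

def frame_rms(samples, frame_len):
    """RMS energy of each non-overlapping frame."""
    return [math.sqrt(sum(s * s for s in samples[i:i + frame_len]) / frame_len)
            for i in range(0, len(samples) - frame_len + 1, frame_len)]

def reference_report(wav_bytes):
    """Rough quality heuristics for a mono 16-bit PCM reference sample."""
    with wave.open(io.BytesIO(wav_bytes)) as w:
        rate, n = w.getframerate(), w.getnframes()
        raw = w.readframes(n)
    samples = [s / 32768.0 for s in struct.unpack(f"<{n}h", raw)]
    rms = sorted(frame_rms(samples, frame_len=rate // 50))  # 20 ms frames
    loud = rms[int(len(rms) * 0.9)]   # ~speech level (90th percentile)
    quiet = rms[int(len(rms) * 0.1)]  # ~noise floor (10th percentile)
    return {
        "duration_s": n / rate,
        "approx_snr_db": 20 * math.log10(loud / max(quiet, 1e-6)),
        "clipped": max(abs(s) for s in samples) >= 0.999,
    }

# Demo on a synthetic 2 s "reference": 1.5 s of a 220 Hz tone, then 0.5 s of silence.
rate = 16000
pcm = [0.5 * math.sin(2 * math.pi * 220 * t / rate) for t in range(int(rate * 1.5))]
pcm += [0.0] * (rate // 2)
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    w.writeframes(struct.pack(f"<{len(pcm)}h", *(int(s * 32767) for s in pcm)))
report = reference_report(buf.getvalue())
```

In practice, checks like these could gate a reference sample before cloning, rejecting clips that are too short, clipped, or noisy; real services presumably apply far more sophisticated signal analysis.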
Most reputable services require consent verification before cloning a voice: you must prove you have permission. Some use voice verification (you must read a specific phrase aloud in your own voice); others require written consent documentation. Watermarking of cloned audio is also becoming standard, to enable detection. Open-source voice cloning tools (such as so-vits-svc and RVC), however, enforce no consent checks, raising ongoing concerns about misuse.