OpenAI is rolling out Advanced Voice Mode for ChatGPT Plus and Team users, aiming to reach everyone by the end of the week.
This new feature, part of the GPT-4o model, offers a more natural, human-like conversation experience.
You can now talk to the chatbot, and it responds with more expressive emotional tones and can even handle interruptions and other conversational cues.
The update also brings five new voices—Arbor, Maple, Sol, Spruce, and Vale—joining the existing lineup. Advanced Voice Mode now integrates Custom Instructions and Memory, so the AI remembers your preferences across chats.
The update also delivers faster conversational responses, better handling of accents, and smoother dialogue in foreign languages.
OpenAI used advanced neural text-to-speech technology to make these voices sound more human. They trained the models on massive amounts of real human speech to capture nuances like tone, emotion, and pacing.
The goal was to move past robotic responses and create an AI that feels like a natural conversation partner.
The new voices are currently available only in the U.S., but OpenAI plans to expand access to more regions and to subscribers of its Edu and Enterprise plans soon.