OpenAI’s New Voice AI Can Now Hear Your Emotions and Talk Back

The Supercharged with AI
3 min read · Nov 5, 2024

OpenAI has announced a significant expansion of its Advanced Voice Mode for ChatGPT, bringing sophisticated voice interaction capabilities to a broader user base. This enhancement represents a major step forward in natural AI communication, offering features like mid-sentence interruption and emotional response adaptation.

Technical Features and Implementation

The new Advanced Voice Mode marks a substantial improvement over the standard voice interface previously available to paid users. The system now supports more dynamic interaction, including the ability to interrupt the AI mid-response naturally, a feature notably absent from the mobile app's previous iteration. It can also detect emotional cues in a user's voice and adjust its tone and replies accordingly.
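
For developers who want to experiment with this style of interaction, OpenAI exposes speech-to-speech capabilities through its Realtime API. The sketch below is illustrative only and rests on a few assumptions: the websocket-client Python package, a preview model name that may have changed, and an OPENAI_API_KEY environment variable. It shows the developer-side analogue of interrupting the assistant mid-sentence by cancelling an in-flight response.

```python
import json
import os

import websocket  # pip install websocket-client

# Assumption: a preview Realtime model; check OpenAI's docs for current names.
URL = "wss://api.openai.com/v1/realtime?model=gpt-4o-realtime-preview"
HEADERS = [
    f"Authorization: Bearer {os.environ['OPENAI_API_KEY']}",
    "OpenAI-Beta: realtime=v1",
]

ws = websocket.create_connection(URL, header=HEADERS)

# Ask the model to start speaking (audio plus a text transcript).
ws.send(json.dumps({
    "type": "response.create",
    "response": {
        "modalities": ["audio", "text"],
        "instructions": "Tell me a short story.",
    },
}))

# A real client would stream the incoming audio events here; when the user
# starts talking over the assistant, cancelling the in-flight response gives
# the "interrupt mid-sentence" behaviour described above.
ws.send(json.dumps({"type": "response.cancel"}))

ws.close()
```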

OpenAI has introduced five new AI voices: Arbor, Maple, Sol, Spruce, and Vale, created in collaboration with professional voice actors from around the world. This follows earlier controversy over the similarity of one of its original voices, Sky, to Scarlett Johansson's performance as the AI in the film "Her." The company emphasizes that the new voices were carefully selected for their warmth, approachability, and suitability for extended conversations.
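
The voices named above live inside the ChatGPT apps. OpenAI's developer-facing text-to-speech endpoint has its own set of built-in voices (such as "alloy"), and the minimal sketch below, which assumes the official openai Python package and an OPENAI_API_KEY environment variable, shows how voice selection works there. It is not a way to call the ChatGPT-app voices, just an illustration of the same idea.

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# "alloy" is one of the API's built-in voices; the ChatGPT voices named above
# (Arbor, Maple, Sol, Spruce, Vale) are app-side and may not be available here.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello! This is a quick demo of OpenAI's synthesized speech.",
)
speech.stream_to_file("hello.mp3")  # save the generated audio locally
```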

Accessibility and Safety Measures

