OpenAI Research Lead for Mental Health Work to Exit

Andrea Vallone, a senior research leader at OpenAI who heads the model policy team responsible for shaping how ChatGPT responds to users in mental health crises, is set to leave the company at the end of 2025.

Vallone’s team has played a central role in improving how ChatGPT responds to distressed users. Under her leadership, the team consulted more than 170 mental health experts and published a report showing that updates made in GPT-5 reduced harmful or unsafe responses in crisis conversations by 65–80%.

OpenAI has confirmed the departure and said her team will report directly to Johannes Heidecke, the company’s head of safety systems, until a replacement is found.

Her exit comes amid rising scrutiny of how AI systems handle sensitive mental health interactions, with OpenAI facing mounting legal and public pressure over how it balances user engagement with safety.

About the author

TOOLHUNT

Effortlessly find the right tools for the job.
