
Your ChatGPT conversations may expose your personality, researchers warn

Wed, May 06, 2026 - 11:40
2 min
AI network learns to determine personality type

Researchers from the Swiss Federal Institute of Technology Zurich (ETH Zurich) conducted a large-scale study to determine how deeply AI can analyze a person’s personality. The results showed that a user’s ChatGPT conversation history is sufficient to form an accurate psychological profile, according to a preprint posted on arXiv.

A total of 668 users participated in the study, providing copies of their chat logs (around 62,000 conversations in all) and completing a standard psychological test.

Based on this data, the authors trained an AI model to identify key personality traits:

  • Extraversion – sociability;
  • Agreeableness – ability to cooperate;
  • Conscientiousness – responsibility and discipline;
  • Neuroticism – emotional instability;
  • Openness to experience – curiosity.

The AI demonstrated the highest accuracy in determining extraversion and neuroticism.

Scientists observed a pattern: the type of topics discussed directly indicates specific traits. For example, discussions about interpersonal relationships make it easier to infer the level of extraversion, while conversations on religious topics reveal a user’s level of conscientiousness.
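The topic-to-trait pattern the researchers describe can be illustrated with a toy sketch. The marker lexicons and scoring below are purely hypothetical, not the study’s actual features or model; they only show the general idea of counting topic-specific word signals in chat text:

```python
import re
from collections import Counter

# Hypothetical topic-marker lexicons (illustrative only, not the study's features)
TRAIT_MARKERS = {
    "extraversion": {"party", "friends", "meet", "talk", "social"},
    "neuroticism": {"worried", "anxious", "stress", "afraid", "upset"},
}

def trait_signal(text: str) -> dict:
    """Count how often each trait's marker words appear per 100 tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return {
        trait: 100 * sum(counts[w] for w in words) / total
        for trait, words in TRAIT_MARKERS.items()
    }

scores = trait_signal("I met friends at a party but felt anxious all night")
```

A real system would train a statistical model on labeled test results rather than rely on hand-picked word lists, but the underlying intuition is the same: what you talk about leaks who you are.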

Will AI profile people?

The authors of the study emphasize that the possibility of automatically creating psychological profiles of millions of people carries serious risks for democratic societies.

Researchers highlight several issues:

  • Manipulation: companies or governments could use such data for targeted propaganda;
  • Cognitive capitulation: people increasingly trust AI as a therapist or mentor, which makes them vulnerable to hidden influence;
  • Mass surveillance: since most services are owned by private corporations, data about users’ inner lives becomes an object of commercial or political interest.

Can data be protected?

The study showed that the longer a person interacts with AI, the more accurate predictions of their behavior become. Even casual, seemingly impersonal queries contain implicit personality markers.

Scientists call on developers to implement “local filtering” tools that would remove excessive personal information from queries before they are sent to AI providers’ servers.
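A minimal sketch of what such client-side "local filtering" might look like, assuming a simple regex-based approach (the patterns below are illustrative; a production filter would need far broader coverage and more robust detection):

```python
import re

# Illustrative redaction rules: strip obvious personal identifiers
# from a prompt before it leaves the user's machine.
PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),
]

def scrub(prompt: str) -> str:
    """Replace matched personal identifiers with neutral placeholders."""
    for pattern, placeholder in PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

clean = scrub("Email me at jane.doe@example.com or call +41 44 632 11 11")
```

Because the filtering runs locally, the raw identifiers never reach the AI provider’s servers; only the placeholder-substituted text is transmitted.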

At present, users are advised to critically evaluate the information they entrust to algorithms and to limit the use of chatbots as personal advisors or consultants.
