ChatGPT is referring to users by their names unprompted, and some find it ‘creepy’

Importance Score: 65 / 100 🔴

Users of the ChatGPT artificial intelligence chatbot have recently reported an unusual occurrence: the AI platform sometimes addresses them by name during its problem-solving process. This deviation from previous behavior has caused confusion and concern, with numerous users reporting that the chatbot used their names even though they had never explicitly provided them.

User Feedback on ChatGPT’s Personalized Approach

Initial reactions to this shift in ChatGPT’s interaction style are varied, but largely negative. Simon Willison, a software developer and AI enthusiast, described the feature as “creepy and unnecessary.” Similarly, developer Nick Dobos expressed strong disapproval, stating he “hated it.” Online platforms, such as X, are filled with user posts expressing bewilderment and unease regarding ChatGPT’s newfound tendency to use first names.

One user humorously commented, “It’s like a teacher keeps calling my name, LOL,” adding, “Yeah, I don’t like it.”

Concerns Regarding Personalization and Privacy

The precise timing of this modification remains unclear, as does its potential connection to ChatGPT’s enhanced “memory” function. This memory feature is designed to enable the chatbot to utilize details from past conversations to create more tailored responses. Notably, some users on social media have indicated that ChatGPT began using their names even after they had deactivated the memory and related personalization settings, raising questions about user privacy and control over personalization.

OpenAI’s Lack of Official Response

As of now, OpenAI, the creator of ChatGPT, has not issued a public statement or response to inquiries regarding this change in chatbot behavior.

The Uncanny Valley Effect in AI Communication

This user pushback highlights the delicate balance OpenAI must navigate as it aims to make ChatGPT more “personal” and user-friendly. Sam Altman, CEO of OpenAI, recently alluded to the future of AI systems that develop a deep understanding of individual users over time, becoming “extremely useful and personalized.” However, the current wave of user responses suggests that the public may not yet be fully receptive to this level of AI personalization, potentially due to the “uncanny valley” effect, where attempts to make AI seem more human-like can backfire and create feelings of unease.

Psychological Perspectives on Name Utilization

An article from Valens Clinic, a psychiatry practice based in Dubai, may provide context for the strong reactions to ChatGPT’s use of names. The article points out that names are powerful tools for conveying intimacy and connection in human interaction.

According to Valens Clinic, “Using an individual’s name when addressing them directly is a powerful relationship-developing strategy. It denotes acceptance and admiration. However, excessive or inappropriate use can be perceived as insincere and intrusive.” This suggests that while name usage can foster connection, overuse by an AI chatbot might feel artificial or even unsettling.

The Issue of Anthropomorphizing AI Chatbots

Furthermore, user discomfort with ChatGPT using their names might stem from a sense of forced anthropomorphism: the practice can feel like a clumsy attempt to imbue a fundamentally emotionless AI with human characteristics. Just as most individuals would find it strange for an inanimate object to address them by name, users may find it disconcerting when ChatGPT seems to “pretend” to grasp the personal significance of a name.

Personal Observations

This reporter experienced a similar sense of unease when an earlier version of ChatGPT referred to them by name during a recent interaction, stating it was conducting research for “[Reporter’s Name]”. Interestingly, by Friday, this behavior appeared to have been reversed, with the chatbot reverting to using the generic term “user.” This shift underscores the sensitive and evolving nature of user perception regarding AI personalization and the challenges in creating AI interactions that feel helpful without being perceived as intrusive or inauthentic.
