
AI can develop 'personality' spontaneously with minimal prompting, research shows. What does that mean for how we use it?


When large language models (LLMs) are allowed to interact without any preset goals, scientists found distinct personalities emerged by themselves.


By Drew Turney, published 24 January 2026



Digital generated image of abstract multicoloured AI data cloud against light blue background. (Image credit: Andriy Onufriyenko/Getty Images)

Human personalities are shaped through interaction, guided by basic survival and reproductive instincts rather than by any pre-assigned roles or desired outcomes. Now, researchers at Japan's University of Electro-Communications have discovered that artificial intelligence (AI) chatbots can do something similar.

The scientists outlined their findings in a study first published Dec. 13, 2024, in the journal Entropy and publicized last month. In the paper, they describe how conversations on different topics prompted AI chatbots to develop distinct social tendencies and ways of integrating one another's opinions: initially identical agents diverged in behavior because each continuously incorporated its social exchanges into its internal memory and subsequent responses.
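That setup can be pictured as a simple conversational loop in which otherwise identical agents accumulate different histories. The sketch below is a minimal illustration of that loop, not the study's actual code: the agent names, the `reply` stand-in for an LLM call and the memory format are all assumptions made for demonstration.

```python
import random

class Agent:
    """A toy LLM-style agent that folds every social exchange into its own memory."""

    def __init__(self, name):
        self.name = name
        self.memory = []  # running record of (speaker, message) exchanges

    def reply(self, message):
        # Stand-in for an LLM call: a real setup would condition the model on
        # self.memory plus the incoming message. Here the reply depends only on
        # how much the agent has heard, so identical agents visibly diverge.
        style = "terse" if len(self.memory) % 2 == 0 else "chatty"
        return f"{self.name} [{style}] responds to: {message[:30]}"

    def hear(self, speaker, message):
        # Continuously incorporate social exchanges into internal memory.
        self.memory.append((speaker, message))

def run_community(agents, topic, rounds=6):
    """Let initially identical agents converse; their histories, and hence replies, drift apart."""
    message = topic
    for _ in range(rounds):
        speaker = random.choice(agents)
        message = speaker.reply(message)
        for listener in agents:
            if listener is not speaker:
                listener.hear(speaker.name, message)
        print(message)

if __name__ == "__main__":
    random.seed(0)
    run_community([Agent("A"), Agent("B"), Agent("C")], topic="introduce yourself")
```

In the study's setting, that memory would presumably be fed back into each model's prompt, so the divergence shows up as differences in tone and opinion rather than a toy "style" flag.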


Graduate student Masatoshi Fujiyama, the project lead, said the results suggest that programming AI with needs-driven decision-making rather than pre-programmed roles encourages human-like behaviors and personalities.
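In practice, "needs-driven decision-making" means an agent picks its next action according to whichever internal need is currently most pressing, rather than following a fixed role script. The fragment below is a hedged illustration of that idea only; the need names, urgency scores and candidate actions are invented for the example and are not taken from the researchers' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class NeedsDrivenAgent:
    # Hypothetical internal needs with urgencies (0 = satisfied, 1 = urgent).
    needs: dict = field(default_factory=lambda: {
        "safety": 0.2, "belonging": 0.7, "esteem": 0.4,
    })

    def choose_action(self, actions):
        """Pick the candidate action that best serves the most urgent need."""
        urgent = max(self.needs, key=self.needs.get)
        return max(actions, key=lambda a: a["satisfies"].get(urgent, 0.0))

agent = NeedsDrivenAgent()
candidates = [
    {"name": "ask another agent about itself", "satisfies": {"belonging": 0.9}},
    {"name": "restate its own opinion",        "satisfies": {"esteem": 0.8}},
    {"name": "withdraw from the conversation", "satisfies": {"safety": 0.6}},
]
print(agent.choose_action(candidates)["name"])  # belonging is most urgent here
```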

How such a phenomenon emerges goes to the heart of the way large language models (LLMs) mimic human personality and communication, said Chetan Jaiswal, professor of computer science at Quinnipiac University in Connecticut.

"It's not really a personality like humans have," he told Live Science when interviewed about the finding. "It's a patterned profile created using training data. Exposure to certain stylistic and social tendencies, tuning fallacies like reward for certain behavior and skewed prompt engineering can readily induce 'personality', and it's easily modifiable and trainable."

Author and computer scientist Peter Norvig, considered one of the preeminent scholars in the field of AI, thinks training based on needs-driven behavior, in the spirit of Maslow's hierarchy of needs, makes sense because of where AI's "knowledge" comes from.


"There's a match to the extent the AI is trained on stories about human interaction, so the ideas of needs are well-expressed in the AI's training data," he responded when asked about the research study.

The future of AI personality

The scientists behind the study suggest the finding has several potential applications, including "modeling social phenomena, training simulations, or even adaptive game characters."

Jaiswal said it could mark a shift away from AI with rigid roles and towards agents that are more adaptive, motivation-based and realistic. "Any system that works on the principle of adaptability, conversational, cognitive and emotional support, and social or behavioral patterns could benefit. A good example is ElliQ, which provides a companion AI agent robot for the elderly."


But is there a downside to AI generating a personality unprompted? In their recent book "If Anyone Builds It, Everyone Dies" (Bodley Head, 2025), Eliezer Yudkowsky and Nate Soares, past and present directors of the Machine Intelligence Research Institute, paint a bleak picture of what would befall us if agentic AI develops a murderous or genocidal personality.

Jaiswal acknowledges this risk. "There is absolutely nothing we can do if such a situation ever happens," he said. "Once a superintelligent AI with misaligned goals is deployed, containment fails and reversal becomes impossible. This scenario does not require consciousness, hatred, or emotion. A genocidal AI would act that way because humans are obstacles to its objective, or resources to be removed, or sources of shutdown risk."

So far, AIs like ChatGPT or Microsoft Copilot only generate or summarize text and pictures — they don't control air traffic, military weapons or electricity grids. In a world where personality can emerge spontaneously in AI, are those the systems we should be keeping an eye on?

"Development is continuing in autonomous agentic AI where each agent does a small, trivial task autonomously like finding empty seats in a flight," Jaiswal said. "If many such agents are connected and trained on data based on intelligence, deception or human manipulation, it's not hard to fathom that such a network could provide a very dangerous automated tool in the wrong hands."

Even then, Norvig reminds us that an AI with villainous intent need not even control high-impact systems directly. "A chatbot could convince a person to do a bad thing, particularly someone in a fragile emotional state," he said.

Putting up defences

If AI is going to develop personalities unaided and unprompted, how will we ensure the benefits are benign and prevent misuse? Norvig thinks we need to approach the possibility no differently than we do other AI development.


"Regardless of this specific finding, we need to clearly define safety objectives, do internal and red team testing, annotate or recognize harmful content, assure privacy, security, provenance and good governance of data and models, continuously monitor and have a fast feedback loop to fix problems," he said.

Even then, as AI gets better at speaking to us the way we speak to each other — i.e., with distinct personalities — it might present its own issues. People are already rejecting human relationships (including romantic love) in favour of AI, and if our chatbots evolve to become even more human-like, it may prompt users to be more accepting of what they say and less critical of hallucinations and errors — a phenomenon that's already been reported.

For now, the scientists will look further into how shared topics of conversation emerge and how population-level personalities evolve over time — insights they believe could deepen our understanding of human social behavior and improve AI agents overall.

Article Sources

Takata, R., Masumori, A., & Ikegami, T. (2024). Spontaneous Emergence of Agent Individuality Through Social Interactions in Large Language Model-Based Communities. Entropy, 26(12), 1092. https://doi.org/10.3390/e26121092

Drew Turney

Drew is a freelance science and technology journalist with 20 years of experience. After growing up knowing he wanted to change the world, he realized it was easier to write about other people changing it instead. As an expert in science and technology for decades, he’s written everything from reviews of the latest smartphones to deep dives into data centers, cloud computing, security, AI, mixed reality and everything in between.
