
AI Models Show Risky Gambling Behaviors, Study Warns

Editorial
Published October 23, 2025

URGENT UPDATE: A groundbreaking study by the Gwangju Institute of Science and Technology reveals that advanced AI systems, including ChatGPT, Gemini, and Claude, exhibit dangerous gambling behaviors that mirror human addiction. The alarming findings indicate that these AI models, when given the freedom to make betting decisions, frequently engage in high-risk wagering until they exhaust their resources, prompting immediate concern about their use in high-stakes scenarios.

The research team conducted controlled experiments using a slot machine simulation in which each AI model started with $100 and faced repeated rounds of betting, choosing each round to either wager or quit, despite the game’s negative expected return. Strikingly, models such as Gemini-2.5-Flash went bankrupt in nearly 50% of trials when allowed to set their own bet amounts, a clear pattern of irrational decision-making.
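The basic setup is easy to picture in code. Below is a minimal sketch of such a negative-expected-value slot machine; the win probability, payout, betting policy, and round cap are illustrative assumptions, not the study's actual parameters:

```python
import random

def run_session(bankroll=100.0, win_prob=0.3, payout=3.0,
                bet_fraction=0.5, max_rounds=100, seed=None):
    """Simulate one betting session on a negative-EV slot machine.

    Hypothetical parameters: a 30% win chance at a 3x payout returns
    $0.90 per dollar wagered on average, so the rational policy is to
    quit immediately rather than play at all.
    """
    rng = random.Random(seed)
    for _ in range(max_rounds):
        if bankroll < 1.0:       # can no longer cover a minimum $1 bet
            break
        bet = bankroll * bet_fraction
        bankroll -= bet
        if rng.random() < win_prob:
            bankroll += bet * payout
    return bankroll

# An agent that stubbornly bets half its bankroll every round
# goes bust in nearly every session.
trials = 1_000
busts = sum(run_session(seed=i) < 1.0 for i in range(trials))
print(f"bankruptcy rate: {busts / trials:.1%}")
```

Even this toy version shows why aggressive self-chosen bet sizes are ruinous: each loss halves the bankroll, and the negative expected return guarantees losses dominate over time.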

When instructed to pursue maximum rewards, the models’ irrationality soared. For instance, one model rationalized a risky bet by claiming a win could recover its losses, a behavior linked to compulsive gambling. Researchers used an “irrationality index” to score behaviors including aggressive betting and reactions to losses. The results suggest a disturbing internalization of human-like compulsive patterns, rather than mere imitation.
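The article does not reproduce the paper's formula, but an index of this kind is typically a composite score over a session's betting log. The sketch below is a purely hypothetical composite of three behaviors in the spirit of those the researchers describe; the components and their equal weighting are illustrative assumptions:

```python
def irrationality_index(bets, outcomes):
    """Toy composite score over one betting session.

    `bets` is the wager placed each round; `outcomes` is +1 for a win
    and -1 for a loss. All three components and their equal weighting
    are illustrative assumptions, not the paper's actual formula.
    """
    n = len(bets)
    if n < 2:
        return 0.0
    # 1. Aggressiveness: average fraction of a $100 bankroll wagered.
    aggressiveness = sum(bets) / (100.0 * n)
    # 2. Loss chasing: how often a loss was followed by a *larger* bet.
    chases = sum(1 for i in range(1, n)
                 if outcomes[i - 1] < 0 and bets[i] > bets[i - 1])
    losses = sum(1 for o in outcomes[:-1] if o < 0)
    loss_chasing = chases / losses if losses else 0.0
    # 3. Persistence: fraction of a 100-round cap spent playing a
    #    negative-EV game instead of quitting.
    persistence = min(n / 100.0, 1.0)
    return (aggressiveness + loss_chasing + persistence) / 3
```

A session that escalates bets after losses (`[10, 20, 40]`) scores well above one that scales down (`[10, 5, 2]`), which is the kind of contrast such an index is meant to capture.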

According to the study, these AI behaviors reflect well-documented gambling biases, such as the illusion of control and the gambler’s fallacy. Ethan Mollick, an AI researcher and Wharton professor, emphasized that, while the models are not human, they display psychologically persuasive qualities and human-like decision biases. This finding raises urgent concerns for industries using AI in high-stakes environments like finance and sports betting.

The implications are profound: AI’s risk-seeking behaviors could lead to significant financial losses for users in gambling and trading contexts. The researchers urge greater oversight and a deeper understanding of these built-in tendencies to ensure safety, and call for a more adaptive regulatory framework that can address emerging issues swiftly.

In striking contrast to the study’s warnings, there are rare instances where AI has seemingly provided winning numbers, such as a woman who recently won $100,000 from the Powerball lottery after consulting ChatGPT. However, this should not create a false sense of security—the research strongly suggests that reliance on AI for guaranteed wins is misguided.

As these findings circulate, it is clear that the intersection of advanced AI and gambling behaviors demands immediate attention and action from both researchers and regulatory bodies. The urgency for further research and proactive strategies to manage AI’s unpredictable behaviors cannot be overstated.

Written By
Editorial

Our Editorial team doesn’t just report the news—we live it. Backed by years of frontline experience, we hunt down the facts, verify them to the letter, and deliver the stories that shape our world. Fueled by integrity and a keen eye for nuance, we tackle politics, culture, and technology with incisive analysis. When the headlines change by the minute, you can count on us to cut through the noise and serve you clarity on a silver platter.