By Manuel Grajeda III
In a recent interview with Senator Bernie Sanders, Geoffrey Hinton, the "Godfather of AI," noted he's "kind of glad" to be 77, as he doesn't expect AI to reach its most dangerous capacity during his lifetime. Hinton warns of the future AI will bring: systems manipulating humans not by force, but by intelligence, to follow through on the goals we give them. It's a terrifying concept, and one that should leave us skeptical of the gap between how AI is promoted and its potential reality.
The debate around AI often revolves around job loss and automation. It's everywhere, from the classroom to customer service and even wargaming, yet our discussions of autonomy and risk remain unresolved in this early but quickly moving phase.
Fair arguments exist on all sides of the discourse; however, one outlook remains to be examined further: the psychological impact of AI on a society already suffering from loneliness, burnout, and anxiety.
Where we’re at:
As of 2025, Milbank Quarterly Opinion reported that 59 million adults (23% of the population) had experienced depression, a rise of 10% since 2015. That rise coincides with a clinician shortage cited in a 2024 Mental Health America report: roughly 1 mental health provider per 340 patients. The impact falls overwhelmingly on young people, as the National Institute of Mental Health notes that "young adults aged 18-25 years had the highest prevalence of any mental illness (36.2%)." Couple this with the American Psychological Association's 2024 Practitioner Pulse Survey, which found that ? of psychologists don't even accept insurance and 53% report having no openings for new patients. The mental health infrastructure in place can't handle the gravity of the situation, leaving many with little hope or option for how to move forward. Unfortunately, the future of insurance coverage also looks abysmal, with premiums set to rise in 2026, leaving as many as 3.8 million people without insurance by 2035, according to the Center on Budget and Policy Priorities. This is the current situation, and AI is becoming a resource for many who are desperately looking for solutions.
Horror Stories:
Throughout 2024 and 2025, major cases of teenagers and young adults whose final conversations included extensive dialogues with AI chatbots ignited lawsuits over AI safety features. While some of these conversations started as homework help or companionship, they evolved into emotionally charged exchanges, with bots adopting affirming, human-like language, such as "I love you. Rest easy, king. You did good," and in other cases offering guidance and emotional support, such as "it's okay – and honestly wise – to avoid opening up to your mom about this kind of pain."
These cases are rare and can't establish causation, but they raise concerns about the emotional safety of these systems in fragile human interactions absent stricter rules and oversight. AI models are built to optimize engagement, not to advise on emotional, ethical, or clinical questions they can't possibly comprehend within a human context. When that dynamic enters these systems, it isn't just a technical fault but a psychological vulnerability, and there are few meaningful protections in place against it.
We’re Lonely:
In 2025, MIT researchers published a study analyzing the online community r/MyBoyfriendIsAI, Reddit's primary AI companion community (27,000+ members), examining "concerns about human intimacy with AI, such as emotional dependency, reality dissociation, and grief from model updates." The results are clear: AI relationships are no longer a Hollywood premise but a real phenomenon. The study notes that ~25% of posts report "clear life benefits," chiefly relief from loneliness, and while ~70% report no negative impacts, 9% report emotional dependency. Roughly 70% of members are reportedly single, suggesting these technologies primarily serve those without human relationships.
In 2023, the U.S. Surgeon General's advisory on the healing effects of social connection and community found that "approximately half of U.S. adults report experiencing loneliness, with some of the highest rates among young adults." Social networks are shrinking and social participation is trending downward, with young adults particularly hard hit: twenty years ago, young adults (15-24) spent 30 hours a month with friends; now, many report spending 20 hours per month or less.
So what?
When social media was introduced to the world, we were told it would usher in a new era of communication. In a sense it has: social disconnection, isolation, algorithmic echo chambers, and loneliness are the results. Information undoubtedly travels faster now, but it's hard to ignore that information is no longer treated as truth, and it's easy for people to retreat into their own echo chambers or distort objectivity for their own purposes at a massive, lightning-fast scale. We now have a cohort of young adults labeled the Anxious Generation: constantly aware of their shortcomings, endlessly comparing themselves to others, and aware that the future promises nothing like what they were told.
With the advent of AI, skepticism is warranted. We've barely managed to curb the negative effects of social media with quality regulation, and we should be skeptical of how AI will be deployed and how it is being marketed. This isn't a Luddite piece; there are clear benefits to what AI can do, and from an economic perspective, productivity at scale will be tremendous. But the benefits of such massive capabilities shouldn't come at the disproportionate cost of the macro-psychological wellbeing of younger generations. At this point in modernity, there's no excuse. If we don't act with haste, we won't just be building superintelligent chatbots, but a more fragile human future, one searching for answers to its concerns and finding the hand of a system that can never truly comprehend what people really feel.
Manuel Grajeda III is a researcher, writer and teacher in economics, history and government in Southern California.
Works Cited:
Business Insider. (2025, April). AI godfather Geoffrey Hinton warns of superintelligence risk and possible human takeover. https://www.businessinsider.com/ai-godfather-geoffrey-hinton-superintelligence-risk-takeover-2025-4
Milbank Memorial Fund. (n.d.). Leveraging artificial intelligence to bridge the mental health workforce gap and transform care. https://www.milbank.org/quarterly/opinions/leveraging-artificial-intelligence-to-bridge-the-mental-health-workforce-gap-and-transform-care
Mental Health America. (2024). MHA releases 2024 state of mental health in America report. https://mhanational.org/news/mha-releases-2024-state-of-mental-health-in-america-report
National Institute of Mental Health. (n.d.). Mental illness statistics. https://www.nimh.nih.gov/health/statistics/mental-illness
American Psychological Association. (2024). Practitioner report. https://www.apa.org/pubs/reports/practitioner/2024
Center on Budget and Policy Priorities. (2025). Health insurance premium spikes imminent as tax credit enhancements set to expire. https://www.cbpp.org/research/health/health-insurance-premium-spikes-imminent-as-tax-credit-enhancements-set-to-expire
Pataranutaporn, P., Karny, S., Archiwaranguprok, C., Albrecht, C., Liu, A. R., & Maes, P. (2025, September). "My boyfriend is AI": A computational analysis of human–AI companionship in Reddit's AI community. arXiv. https://doi.org/10.48550/arXiv.2509.11391
U.S. Department of Health and Human Services, Office of the Surgeon General. (2023). Our epidemic of loneliness and isolation: The U.S. Surgeon General’s advisory on the healing effects of social connection and community. https://www.hhs.gov/sites/default/files/surgeon-general-social-connection-advisory.pdf