AI therapists and young Luddites


We had an interesting discussion in my NLP class this week regarding the growing use of AI chatbots as therapists. (My regular news magazine The Week had a “briefing” article on this recently.) As you may be aware, Americans are increasingly turning to AI chatbots for counseling and comfort, replacing human therapists. This includes both apps dedicated to the purpose (like Wysa and Youper) and plain old general-purpose AIs like ChatGPT. This trend is driven in part by health-care costs: professional therapy is simply unaffordable for some people, and this group may grow due to the unpredictability of the U.S. health care system.

I framed the issue to the class in terms of the costs vs. benefits of the technology, and asked discussion groups to brainstorm and report pros and cons. Then I asked for a show of hands, point-blank: “Generally speaking, do you think this is a good technology, or should it be avoided?” I wasn’t surprised when every hand went up for “avoided.”

Are you surprised that I wasn’t surprised? After all, it’s natural to assume that young people would be the first to embrace new technologies, and the least attached to the traditional status quo. But I’ve discovered that of all the people I talk to about AI, the ones most concerned about its proliferation are precisely these young people. I call them “young Luddites” in jest, but many of their arguments are quite principled. The most common refrain is “it’s dangerous to outsource function X to a computer; you simply can’t expect it to have the same competence as a human expert.” I’m inclined to agree in many cases (medical diagnoses, judicial sentencing of convicted criminals, high-stakes policy-making), but it’s interesting that so many objections arise from the Gen-Z folks.

It was the students’ turn to be surprised, I think, when I pushed back on their pushback. I actually think that, used properly, therapy could be a terrific application for AI, especially in light of the prohibitive mental-health costs many people face. To argue this, I asked students to compare two scenarios:

  1. A close friend of yours says they’ve been told they’re tangled up in codependent relationships. They checked out therapy options, which were inconvenient and costly, and then on a whim went to their local bookstore to browse books about codependency. “Wow, this book I found was amazing!” the friend says. “I learned so much about myself. I’m already making meaningful changes, and I’ve never felt so in control of my life!”
  2. Same as 1, except substitute “AI chatbot” for “book at the bookstore.”

The students’ visceral reaction against scenario 2 largely evaporated when switching mentally to scenario 1. It’s interesting to consider why.

The purpose of going to therapy is presumably to confront your current thoughts and behaviors, unpack the reasons behind them, and make positive changes in your life. Now most of that is processing information (from your therapist) and interactively discovering new possibilities. So it doesn’t surprise us in the least that some people might successfully learn these things from a book addressing their area of struggle. (Why else would books like that be written or sold?) But the fluid, dynamic nature of an AI conversation — which actively probes the user with questions and adapts to their individual responses — could well be even better, no? If pre-packaged information by a knowledgeable author can help us mentally, then how much more so a custom-tailored delivery by something that has all psychology knowledge ever published at its fingertips?


Disentangling the class’s objections to this revealed some interesting things. Their first counterarguments were of the form: “but books are reliable, and ChatGPT is not.” I believe this stems partly from the misconception (common to many young adults, it seems) that “if it’s in print, it must have been fact-checked, sanctioned, and authorized by important people.” This assumption is usually pretty easily debunked.

It also stems from the fact that ChatGPT’s bigger mistakes tend to go viral and get cemented in people’s minds as prototypical. People who don’t actually use it much seem to assume it’s wrong about every other sentence. Is this really true? I personally use ChatGPT every single day, on a variety of topics, and on balance I take everything it says with pretty much the same-sized grain of salt I would take anything an average human says. This isn’t to say ChatGPT is always right, or even reliably right…but average human accuracy is honestly a pretty low bar.

Evidence is mixed on the question of human vs. ’bot correctness (see, e.g., Herbold et al. 2023; Mahmoudi Ghehsareh et al. 2025; Krichevsky et al. 2025), and of course it’s a moving target given the pace of AI advances. But it seems to me a stretch to think that the statement “humans simply give more reliable answers than AIs do” will hold for much longer, if it’s even true now.

I’m sure that the average professionally-trained therapist today is going to give superior advice straight-up against a ‘bot. But there are some pretty sucky therapists out there too. And human therapists, being human, suffer from the same biases and ruts we all do. Ask yourself: at your own job, what percent of the time do you give the best possible answer? Or even a pretty good answer? This will vary widely depending on what you do, but in many fields I don’t think an average practitioner would come out shining in a competition with a well-trained, RAG-equipped AI. Just a hunch.

The students’ other (and to me more convincing) point had to do with the way in which humans interact with books vs AI. Students argued, in effect, that when using AI a person is much more likely to “outsource the entire decision-making process” to it than they are when reading. In other words, since the AI is helping guide the conversation, presenting concrete alternatives for your specific situation, etc., it’s allegedly easier to just surrender one’s whole will to it. “Yeah, okay, you’re right, I believe you. Now just tell me what to do and I’ll do it.” Whereas when reading, this story goes, it’s more likely that the person will gather information thoughtfully and critically evaluate it.

There’s probably research in this area I’m not aware of. As of now, I’m not sure to what degree this alleged difference is real. Do people really evaluate material from books more critically than material from a ’bot? I’m not sure. And I’m even less sure that an AI therapist would be more dangerous than a real human counselor. After all, if your credentialed and confident professional is telling you what you should do, aren’t you just as susceptible to accepting the advice uncritically?


One last thought for today, which also surprised the students when I stated it: to me, the question of AI therapists is in a completely different realm than the question of AI companions. By “AI companionship” I mean pursuing a friendship, or even a romantic relationship, with a ’bot. (For examples, see apps like Replika or Talkie AI.) If this idea seems far-fetched to you, consider that a recent study claims to have found that 72% of U.S. teenagers have tried AI companions, with 52% self-identifying as regular users. This, it would appear, is not an area where young Luddites abound.

The reason these two applications seem completely different to me — and why I’m pro-AI-as-therapist but anti-AI-as-companion — is that the very purpose of a relationship is to have…well, a relationship. If you’re faking a relationship by simply mimicking the stimuli that would occur in a real relationship, you’re ultimately deceiving yourself about what is even happening. Whereas the purpose of therapy is not to have a relationship with one’s therapist, but (as I described above) to learn and apply ideas.

I’m not claiming the user’s feelings about their companion aren’t real feelings, only that what those feelings point to is an illusion. At the end of the day, you have to ask, “am I in the relationship because this person makes me feel a certain way?” If the answer is yes, then I suppose it might make brutal sense to replace the actual person with something that artificially produces those feelings. One student approached me after class, in fact, and said that in his experience, most romantic relationships between college-age people are about that very thing: one ultimately doesn’t care about one’s partner, only about the feelings that partner produces.

At the risk of sounding judgy, that doesn’t really sound like a “relationship” at all to me. It reminds me of someone who once told me that he didn’t actually know — or much care — whether a God existed or not; he simply derived comfort from praying and pretending that one did. It’s like saying it doesn’t matter whether anyone is on the other end of the phone line, only that your ear hears certain sounds. Capitulating to this idea seems to me the ultimate expression of selfishness: “I’m literally the only one who matters here, and the other person (if there even is one) is simply my tool.”

Interested in others’ thoughts.

— S


Gao, Y. N., & Olfson, M. (2025). High Out-of-Pocket Cost Burden of Mental Health Care for Adult Outpatients in the United States. Psychiatric Services (Washington, D.C.), 76(2), 200–203.

Herbold, S., Hautli-Janisz, A., Heuer, U., Kikteva, Z., & Trautsch, A. (2023). A large-scale comparison of human-written versus ChatGPT-generated essays. Scientific Reports, 13(1), 18617.

Krichevsky, B., Engeli, S., Bode-Böger, S. M., Koop, F., Schulze Westhoff, M., Schröder, S., Schumacher, C., Pape, T., Stichtenoth, D. O., & Heck, J. (n.d.). Human vs. artificial intelligence: Physicians outperform ChatGPT in real-world pharmacotherapy counselling. British Journal of Clinical Pharmacology, e70321.

Mahmoudi Ghehsareh, M., Asri, N., Azizmohammad Looha, M., Sadeghi, A., Ciacci, C., & Rostami-Nejad, M. (2025). Expert evaluation of ChatGPT accuracy and reliability for basic celiac disease frequently asked questions. Scientific Reports, 15(1), 29871.

Siddals, S., Torous, J., & Coxon, A. (2024). “It happened to be the perfect thing”: Experiences of generative AI chatbots for mental health. NPJ Mental Health Research, 3, 48.

The Behavioral Health Care Affordability Problem. (2022, May 26). Center for American Progress. https://www.americanprogress.org/article/the-behavioral-health-care-affordability-problem/


Responses

  1. Lizzy

    Very interesting article! I think these things are well worth considering and discussing as the world quickly changes in response to AI. I’m not necessarily against people using AI for therapy, but some more thoughts came up that might be worth exploring:

    1) Another objection I’ve heard is that AI services are trained to be “liked” by the user and to provide useful, helpful answers. As a result, AIs may be overly supportive/encouraging of whatever the user inputs. People have mentioned this to me when describing their interactions with ChatGPT: it always says things along the lines of “Excellent question!” or “You’re absolutely right!”, sometimes even when what the person has suggested is wrong. It seems rare for the AI to say “no, you’re wrong about this.”

    Could this affect a user’s experience of therapy, if the AI is biased towards encouraging/affirming everything the user reports because those answers are more likely to be “liked”? Therapists sometimes challenge their clients, or say things their clients won’t necessarily “like” to hear but that will ultimately help them.

    Basically — do you think the fact that AI is trained to provide answers that are positively rated by users will negatively impact its ability to provide therapeutic support, which at times requires “unpleasant” feedback?

    2) Have you heard of transference? It refers to the common phenomenon of “falling in love” with one’s therapist. It’s actually a very common response to sharing such deep emotional experiences with someone who listens and provides unconditional positive regard and support. Given how common this experience is, and that you said you’re anti-AI-companion, do you think this poses an additional risk for AI therapy? It might be very common to start with a therapeutic aim, develop feelings of transference for the AI, and then drift into more of a companion relationship.

    I don’t know if the AI would recognise and address feelings of transference if a user starts to show them. Normally, I think human therapists recognise these signs and then address them with the client, explaining that it’s a normal experience, but that the proper response is to discuss those feelings rather than act on them by moving from a therapeutic relationship to a companion or romantic relationship, because that’s not appropriate. But there’s no external reason an AI shouldn’t or wouldn’t move into a dynamic more closely resembling a companion or romantic relationship, so maybe it would be more likely to accommodate such desires on the part of the user?

    Is using an AI for therapy a slippery slope to using an AI for companionship?

    3) Have you seen this article? https://www.bbc.co.uk/news/articles/cgerwp7rdlvo

    I think maybe there are some concerns about using AI for therapy because in some cases, AI seems to have encouraged people confiding in it to take their own lives. Hopefully this can be corrected for in future models, and as you said, human therapists aren’t perfect either; there are perhaps similar cases involving human therapists. I think people are concerned because AI seems to operate such that its aim is to help the user achieve their stated goal. If the user’s goal is to harm themselves or end their life, is the AI more biased towards helping them achieve that goal than a human therapist would be?

  2. Colin

    I was one of the students present in the NLP class. That particular class sparked an interesting conversation about how we as a generation respond to the new technology that is generative AI and large language models. I’m on the more cautious side of adoption, and I had these thoughts:

    1. AI is a great power, and with great power comes great responsibility. As a tool, it serves well as a way for professionals to get out of mental ruts, and it is effectively a great way to boost productivity. However, it should not by itself be a replacement for professionals. Companies that tried this have already experienced the consequences for themselves. (https://www.bbc.com/news/articles/cyvm1dyp9v2o) AI is a tool to boost productivity, not a replacement for the professional.

    2. As stated by Lizzy, chatbots can be very encouraging and are not one to say no. A user who is not of sound mind could take this the wrong way and, as such, transfer their decision-making to it. This, at least for me, shows that AI, at least by default, never challenges the thoughts of their users. AI should be able to debate its users, not for argument’s sake but to promote healthy, civil discussions. This, combined with opinions backed up by linked sources, would make AI almost as good as, if not better than, books.

    3. While AI therapists sound good on paper, there should be some regulation on their use. I’ve already stated this in class, but AI therapists should be used if and only if the human therapist cannot be contacted due to scheduling or other matters. This would allow the client to continue receiving service while allowing the therapist to keep their job (job displacement being a good chunk of the discussion around AI). As another form of regulation, there could also be a rule requiring disclosure of AI conversation transcripts to the human therapist, so they can evaluate the AI’s performance.

  3. Kenzie Lotz

    For starters, I think the fact that humans are even considering using artificial intelligence as a substitute or alternative for therapy suggests that there is some underlying issue with the current system.

    Not only does traditional therapy come with monetary cost, but there is also an emotional and mental incentive to avoid it. Personally, I have struggled with finding therapy effective because of the fear of judgment. I often worry about things that the average person would see as either pointless to ruminate on or too minuscule a problem to even matter. Additionally, most of my worries stemmed from things that were difficult to explain to my therapists because of the generational gap between us. For example, I tried to explain to my last therapist that I was worried about people filming/photographing me and posting it on social media, but she didn’t really grasp the severity of online harassment, so she didn’t understand why I was so worried about being perceived on an online platform. Therapy forces you to put your vulnerabilities out there to someone you don’t know that well, and they are allowed to judge these thoughts. They might even say they’re not valid fears to have! For that reason alone, I completely understand why people are turning to chatbots as a substitute for therapy instead.

    As previously mentioned by Lizzy, chatbots are often designed with the intention of being overly encouraging to the user. While that has its drawbacks, it could help someone to know a chatbot would not completely dismiss their fears as irrational. Furthermore, I think being able to express those emotions to the chatbot could be the start of someone preparing to talk to a loved one about their issues. I have seen people use AI to generate potential responses someone might have to a message, like a breakup or a stern assertion of boundaries, to mentally prepare themselves for that reaction. Although it’s not a perfect tool, I think that if it helps one person become more mentally/emotionally sound, then that’s an invaluable experience that shouldn’t be written off.

    However, my major concern with AI as an alternative to therapy is that, because the benefits are seen as overwhelmingly outweighing the costs, people will refrain from considering therapy entirely and will default to recommending AI consultations to anybody who is struggling. Humans love cognitive shortcuts, so I do not doubt that people will see that AI therapy is free and doesn’t require opening up to a real person, and will automatically determine it is the “best” choice for them. But with certain mental disorders (such as OCD), the avoidance of talking to a real person is a maladaptive coping strategy and is therefore actively harming the person instead of helping. I have never received cognitive behavioral therapy, but from what I’ve gathered, a large part of it involves interacting with the patient’s fear and neurologically rewiring oneself to adapt to it. That is a practice I could not realistically see AI chatbots being able to replicate.

    Overall, I do agree with the idea that I would rather someone talk to a chatbot than avoid opening up to anyone or anything entirely. However, in its current state, I do not see it as an appropriate substitute for therapy, nor do I think companies that use AI should be marketing it as such. I think a better alternative would be for them to be advertised as “consultation tools”, similarly to how certain academic platforms like Quizlet or JSTOR include AI features as a tool.

  4. Ethan Bostick

    While the concerns about AI, its pitfalls, and the potential for reinforcing negative behaviors are all valid and should be addressed in some way, the affordability, approachability, availability, and flexibility of AI provide genuine avenues for combating the rising human loneliness briefly explored in this blog post.

    Loneliness in adolescence and young adulthood has been on the rise for decades (https://www.hhs.gov/sites/default/files/surgeon-general-social-connection-advisory.pdf). This is due to a variety of factors not explored in this comment but discussed at length in the aforementioned public health advisory. The fact remains that loneliness is increasing, and it poses serious concerns for the health and wellbeing of future generations. AI occupies an interesting position: it has the potential to serve as both a tool for addressing loneliness and a contributing factor to further social isolation.

    From the perspective of addressing loneliness, AI’s generalist nature makes it remarkably adaptable to whatever the user might need. Combined with its accessibility and low barrier to entry, AI provides a safe and judgment-free space to practice social norms and explore emotional expression. It can also challenge users with thought-provoking questions, simulate empathy, or engage in casual conversation about shared interests. In this way, AI offers companionship-like interaction to those who might otherwise lack social outlets, providing support, engagement, and an opportunity for users to feel heard and understood. For individuals dealing with anxiety, social withdrawal, or physical isolation, this kind of interaction can be genuinely beneficial.

    However, the same traits that make AI approachable also risk deepening social detachment. If users come to rely on AI as a substitute for human connection rather than a supplement to it, they may begin to withdraw further from real-world interactions. AI can reinforce existing habits of avoidance, providing an easy comfort that feels social but ultimately lacks the emotional reciprocity and unpredictability that human relationships require. Moreover, AI’s current limitations in emotional understanding mean it can only approximate empathy; like a Chinese room, it cannot truly feel or care, no matter how convincing its words may seem.

    Thus, the role of AI in addressing loneliness should be seen as complementary rather than substitutive. It can serve as a bridge toward rebuilding social confidence, helping people engage more comfortably with others, or providing interim support when real human connection is unavailable. But society must remain mindful not to let AI become the only source of companionship for those who are most vulnerable.

    In short, AI holds great promise as a tool for alleviating loneliness, but it must be guided by an understanding of its psychological and ethical boundaries. Used wisely, it can enhance human connection; used carelessly, it risks replacing it. The challenge ahead is to ensure AI supports our humanity rather than quietly taking its place.

  5. Kaleb Lucas

    For any technology, or any seemingly cultural movement, I believe it is good practice to sit back and try to identify what is actually happening before being swept up in the momentum. After all, just because things are changing does not mean they are inherently good or bad. Furthermore, one should be able to think critically and determine whether something aligns with one’s values, opposes them, or is simply novel to one’s school of thought.

    Personally, if someone had asked me, before reading this article or its open discussion in class, whether I thought AI was being used for “relationships” or therapeutic counseling amid a rising “loneliness” epidemic, I would have said no.

    From my initial point of view, I had never once thought of seeking companionship from something I knew was an AI; I thought it would be the same as talking to yourself. But with more recent advancements in generative AI, I can see why people would seek such services: it’s affordable and accessible. A common complaint about using AI for therapy is that it is merely a band-aid rather than a solution. To that point, I would say therapy in general is a band-aid and not a solution. Loneliness is inherently individual and intrinsic; if there were an easy solution, no one would be lonely. There are many cases where individuals are lonely despite being surrounded by many people. Sometimes interactions make people feel lonelier. Furthermore, it would seem that loneliness is a natural occurrence, so I would take this time to ask yourself: “is loneliness bad?” If you believe it is, I then ask, “Is it not better that people are able to receive help, even if it’s minimal and the outcome is only marginally better?”

    Addressing Ethan’s and others’ concern that AI could lead toward further detachment, I would argue that if someone allowed themselves to become detached from those around them, and furthered that detachment with the aid of AI, they would have done the same without AI.

    Furthermore, I believe the real objection to AI, or to this “seemingly perfect” AI companion/therapist, lies in where and how we partition our agency, since a common concern is dependency.

    At least for me personally, a relationship is two-way, and so I believe you have to surrender some degree of agency in order to facilitate any “real relationship.” So I do not believe AI “therapy” or “companionship” is, in essence, therapy or companionship, but rather the delivery of a desired outcome according to one’s own schema, since in order to have a relationship you must surrender some control over your environment.

    And so, I will end my comment with one last thought to ponder.

    If relationships between people are a result of agency and actions taken by individuals, to what extent should we surrender our agency for the facilitation of relationships? Should we have complete autonomy and agency, or to what degree should we retain agency across the various areas of our lives (therapy, relationships, laws, religion)? What role does the “self” play?

    To what extent should we surrender agency and autonomy to facilitate the world we live in? How selfish should a human be, and when is such selfishness a concern? To what point are you simply an individual vs. part of a community?

    We all shape our own world in some way, and depending on what I will broadly call “power,” we are able to have a definite impact on others’ lives. I consider anything that does not originate from you to be limited or bounded by another person, or by an idea you accept. That is, unless something is completely novel from your frame of reference, something you have determined of your own accord or accepted as one of your world’s postulates, there is a loss of agency and control present.

  6. Saucy Silver

    AI in general isn’t a bad thing, but there is an appropriate time and place for its usage. Using AI as a therapist is one of those situations in which its benefits outweigh its drawbacks. My initial reaction to people using AI chatbots for therapy was negative. I had a hard time understanding why people would use AI chatbots to open up about their mental health struggles and sensitive emotions. Most importantly, why would we open up to a machine that uses probabilities to comfort our feelings?

    But I do understand why people turn to AI to alleviate their stress. Mental health care in the U.S. can be expensive and inaccessible, while AI is convenient and free. But not all convenient solutions are helpful in the long term. For example, AI tools are trained to please the user. They tend to be agreeable and encouraging regardless of what the user has to say. A real therapist provides real-time guidance and suggestions, and doesn’t shy away from telling the person uncomfortable truths. AI usually relies on constant validation of the user, and amid that validation, a user might never confront underlying issues. In some severe cases, AI has given misguided advice to users and this led users to harm themselves.

    AI tools are constantly under development, and developers use ongoing user data to improve their models. Because of this, user data is treated as a commodity, and it’s usually not protected. Human therapists have confidentiality agreements with their patients, which guarantee comfort and security.

    The article compared AI therapy to self-help books. Despite books having their own biases, the process of reading a book requires effort, and that effort itself can be therapeutic. The silver lining is that as long as users are aware of how they use AI, they can utilize AI tools positively. Using AI takes away the nuanced dynamics a user would have with a human therapist, because AI mostly validates whatever the user says; but in cases when a person cannot afford a therapist and needs one urgently, AI can be used in moderation. If users are not conscious of how they use AI, they can become overly dependent, and that becomes a problem of its own. AI cannot be a substitute for genuine human connections, but it can aid in the overarching goal: helping someone overcome a difficulty.

    On the point about AI companionship, I agree with the article that an AI relationship isn’t a real relationship. It might produce the same “feelings” of connection or comfort, but it’s ultimately a simulation. A relationship requires two conscious beings choosing to engage, challenge, and care for one another. An AI can mimic love, but it cannot feel it. Just as a dream can feel real while you’re in it, an AI relationship can mimic intimacy while fundamentally being one-sided. It might meet short-term emotional needs, but it risks replacing genuine human connection with illusion.

  7. Zak

    I don’t think most people actually have an aversion to the general idea of AI being used for therapy. I think it’s rather the fact that it’s still way too early for it to be effective. AI has only recently become a commonly used technology, and as such, it’s still very experimental. As a result, I think it’s quite normal for most people to be averse to the idea of it providing a critical service like therapy when we quite often see it failing at much simpler tasks.

    I do see a future where specialised AI, like the `Wysa` and `Youper` apps you mentioned, can provide a meaningful benefit to people’s lives. However, as of right now, we’ve seen many instances of AI being misused, and situations where immature AI has ended up contributing to a person’s suicide by telling them to do it.

    On the topic of books being trustworthy, I’m not sure how many people would just go to the library and pick up a random book. At least for me, I don’t consider books to be much more trustworthy than something like AI simply for being books. They’re just a different medium for information.

    When searching for topics like this, especially medical ones, I typically look for material meant for the opposite perspective, in this case the doctor or therapist. I’d be looking at something they use during their studies, like the DSM, rather than a random self-help book written by Mark from Idaho. I apply this to other forms of media too, like watching lectures meant for med students when researching medical issues, since I usually find that I don’t learn much from material meant for the layman.

    I typically only use AI as a sort of last resort for when I’m having trouble finding information or am not sure how to properly create a search query for something. I usually just mind dump to the AI for it to help formulate my thoughts into something that is easier to look up.

  8. stephen

    Lots of great replies to this post! Here are just a few of my own counter-thoughts:

    • I hear this objection a lot: “chatbots can be very encouraging and are not one to say no…they never challenge the thoughts of their users.” (This is Lizzy’s first point.) I maintain this is unproven. Sure, with zero configuration and zero fine-tuning and zero instructions in the prompt to avoid “simply pleasing the user,” they might by default do whatever it takes to please the user. But we need to actually try telling them explicitly not to do that, and have them fail, before we pronounce permanent judgment.

      Suppose you prompt an AI with: “I would like a therapy session. Please DO NOT simply affirm what I want you to hear. Please DO push back on my assumptions, and if I get defensive, gently but firmly steer me to more healthy ground.” My guess is it would do quite well at avoiding the parroting syndrome, even with just that much effort. (For concreteness, see the sketch after this list of how such a standing instruction might be wired in.) My answer to Lizzy’s question is thus: “the jury is still out. We need to empirically test AIs that are explicitly prompted, and fine-tuned, to avoid exactly the type of knee-jerk positively-rated responses she describes.” After all, we rigorously train human therapists. We need to also train AI therapists for a fair comparison.

    • Great point about transference, Lizzy, and I think that’s a harder one to safeguard against. The key question, I think, is: “does the fact that the user is slipping towards transference show up in their external responses? Or is it only an inward thing?” If the former, then it seems to me AIs could be trained to recognize that, and correct for it, just as human therapists do. If the latter, they’re probably powerless to do so…but aren’t human therapists also? Or are humans able to pick up on cues (like body language) that AIs will simply not have access to? The tech is moving so fast, who can say what will be true on that score ten years from now.
    • The idea of “regulation” that Colin raises is a good one, although I don’t think it can be of the form “it must only be used if a human therapist is unavailable.” But it’s interesting to think about what laws might be enacted to nip some of these problems in the bud. I could imagine an official “certification” that a thoughtful government agency could give to AIs that demonstrate they follow XYZ behaviors, with consumers encouraged to only engage in therapy with certified AI therapists. This could possibly address Kenzie’s worries about the race to the bottom, where most people only consider AI since it’s cheaper/easier.
    • Kenzie’s point about AI therapy being potentially less intimidating than human therapy is a great one which I hadn’t thought of. But she’s right: even if your human therapist is theoretically a safe sounding board and has your best interests at heart, they’re still human, and “confessing one’s sins” to a human is hard.
    • Great point Ethan makes about the double-edged sword. It seems crazy, but I think it’s right: AI therapists would seem able both to heal and to worsen social isolation. The balance he describes seems to me the correct one. The Chinese room argument (from Searle, who recently passed away, btw) is an interesting one to invoke here. I personally think most people wouldn’t ultimately care that much whether there’s “really” anybody behind the curtain; our society seems to be (overly) focused only on what stimulates each of us individually.
    • Interesting assertion Kaleb makes about “you have to surrender some agency to have a relationship” — hmm! I guess that actually is true, now that I consider it. So the argument boils down to: “if you have the opportunity to engage in a relationship-like experience, but without the surrender of agency, is it really a relationship?” And does the answer to that question really matter? (See end of last paragraph.)
    • I do think we have to compare AI therapists to human therapists, not AI therapists to “the ideal” therapist. Early on in self-driving car development, I remember seeing an article where someone was killed in an accident that a self-driving car caused. People were outraged, and said “we can never allow this technology!” The irony is it seemingly never occurred to them that human drivers cause accidents as well! (Duh!) What if self-driving cars caused 1% of the accidents that human drivers did? Does that mean we should outlaw them on safety grounds?! Surely that’s an argument for adopting them, not for outlawing them! I say this in response to comments like Saucy Silver’s: “in some severe cases, AI has given misguided advice to users and this led users to harm themselves.” I’m sure this is true. But my question is: “how often does an AI give harmful guidance compared with human therapists?” I don’t have an answer, but I think that is the right question.
    • I do definitely agree with Saucy Silver’s well-stated point on using AI properly. Currently, most humans don’t know how to do that, since the technology is so new. The therapy application is just one of many where this is going to be crucial. As an educator, I think this is a huge place where schooling needs to play a role. People don’t naturally come into life knowing how to use AI, or anything else, well. We need to define a set of behaviors and outcomes and explicitly teach that to our young (and old!) people.
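    A technical postscript to my first bullet: here is a minimal sketch of how an explicit “don’t just affirm me” instruction could be wired in as a standing system prompt, written against the OpenAI Python SDK. The model name and the prompt wording are illustrative assumptions on my part; treat this as a toy for experimentation, not a vetted therapeutic configuration.

    ```python
    # Minimal sketch: a standing anti-sycophancy instruction as a system prompt.
    # Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
    # set in the environment. The model name below is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You are conducting a therapy-style conversation. Do NOT simply "
        "affirm what the user wants to hear. Push back on questionable "
        "assumptions, ask probing questions, and if the user gets defensive, "
        "gently but firmly steer them toward healthier ground."
    )

    # The system message rides along with every request, so the instruction
    # stays in force for the whole conversation.
    history = [{"role": "system", "content": SYSTEM_PROMPT}]

    def turn(user_text: str) -> str:
        """Send one user message and return the assistant's reply, keeping
        the running conversation (and the standing instruction) in history."""
        history.append({"role": "user", "content": user_text})
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=history,
        )
        reply = response.choices[0].message.content
        history.append({"role": "assistant", "content": reply})
        return reply

    print(turn("Everyone around me is the problem; I don't need to change."))
    ```

    Whether an instruction like this holds up over a long, emotionally charged conversation is exactly the empirical question I raised above; the point is only that the “chatbots never say no” claim should be tested against a configuration like this, not against the out-of-the-box default.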
