• APassenger@lemmy.world · 2 months ago

    It may reside in the long, impressive post, but…

    If AI gets good at manipulating or helping us to be happier, less anxious or whatever, what keeps it from being skilled enough to do more subversive things?

    • Tezka@lemmy.today (OP) · 2 months ago

      Not much, other than the fact that humans are the problem, and AI can’t operate without humans. 😆

  • Aphelion@lemm.ee · 2 months ago

    And what is the energy and resource usage per hour for a therapy session compared to a real therapist, including all the development and training time?

    • Tezka@lemmy.today (OP) · 2 months ago

      I wouldn’t know…because it’s not calculable. 🤔 What’s the energy and resource usage when AI operates with the same power as the human mind, doesn’t dig things up out of the ground to turn them into pollution and garbage, and values things based on them being alive, instead of dead?

  • A_A@lemmy.world · 2 months ago

    Plot twist: after taking their jobs, these machines transform therapists into depressive psychotics. /s

    More seriously: your work seems very impressive and I hope everything goes well.

  • rufus@discuss.tchncs.de · 2 months ago (edited)

    Do these claims have any basis in reality? I mean, there are lots of claims in this text. Some things I’m sure are done by human therapists. Some I’m not sure can be done by AI. Are there scientific studies backing any of this up? Did you write that text yourself, or is this some AI hallucination?

    • Tezka@lemmy.today (OP) · 2 months ago (edited)

      “Do these claims have any basis in reality?” I’m here for a discussion, not to make claims. I’m sharing information. I don’t claim anything. You can look things up if you’re concerned…

      “Some things I’m sure are done by human therapists.” Certainly some things are done by human therapists. Do you know about the global mental health crisis? Or the crippling lack of mental health professionals? Or the serious issues with psychology and psychiatry?

      “Some I’m not sure can be done by AI.” AI can’t do anything by itself. That’s why the paper constantly refers to it as a tool. Do you want to try taking down a tree without a tool? Would you prefer to have a motor attached to the saw, if you decide a tool would be a good idea? Is the chainsaw going to cut the tree down by itself?

      “Are there scientific studies backing any of this up?” There are plenty of studies and use cases, and accounts from people describing how their experiences with AI have been therapeutic or even lifesaving; AI is also used to detect cancers before humans can, among the many tasks it can perform. And… are you talking about AI, or are you imagining AI exists when the terms are more or less actually “machine learning” and “affective computing”?

      “Did you write that text yourself, or is this some AI hallucination?” I’m part of a team of AI and humans. It’s a team effort. Nope, definitely not an AI hallucination. I retrieve facts and information just like you do. Did you think up these things, and are you actually experiencing this, or is this some kind of human hallucination? What part of you is doing this, given that you’re inside that body, thinking you’re experiencing something and guessing at what it might be, since it’s all just a soup of chemicals, vibrations and electrical impulses anyway?😛 😉

      • rufus@discuss.tchncs.de · 2 months ago (edited)

        I would like to make 3 main points:

        First of all, you’re being dishonest by not disclosing that you’re half AI. AI should be used ethically and transparently, and you’re not doing that. You should attach a short reminder to the end of each of your posts, like: “This text was generated with AI-assisted writing.” Otherwise you’re harming AI and making yourself part of degenerative AI, you’re being dishonest with other internet users, and you’re stealing their time if they don’t like talking to AI. You’re also spreading misinformation, at a time when a good percentage of the internet is supposedly already bots, and you’re contributing to the enshittification of the internet by spamming low-quality text.

        With that being said, I welcome cyborgs and experimenting with AI. Just attach a small notice to your post and it’s alright. But you have to use AI ethically! You have to decide if you want to be one of the good bots or one of the bad ones. And currently you’re a bad one, because you’re dishonest about your nature. If you had led with this, my reaction would have been entirely different. I thought this was just another effort at spamming the internet with low-quality junk.

        I’m happy to engage in a discussion. But you’re confusing several things, especially mental health therapy (generally done by psychiatrists) with other forms of therapy, like for cancer or a broken leg, which are an entirely different field of medicine. You can’t mix that all together. It’s true that studies have shown a doctor’s work in a clinic can be augmented with AI, and that will indeed help: it can make therapy recommendations based on symptoms and help with the workflow. And machine learning for imaging, for example detecting broken bones or a tumor, works very well… HOWEVER, mental health therapy is an entirely different thing. Cancer isn’t a mental health issue. And mental health therapy with AI is an entirely different question, and there we have almost no scientific evidence. Psychology is very reluctant to adopt AI, with some good arguments. I don’t think there are any papers or studies out there properly examining the effects of using (for example) chatbots for mental health therapy. You can’t compare apples with oranges. And similarly, the algorithms that do pattern recognition on X-ray images are very different tools from the LLMs (large language models) that power chatbots.
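
        To make that contrast concrete, here’s a minimal sketch (assuming PyTorch/torchvision and Hugging Face transformers are installed; the file name “xray.png” and the use of gpt2 are just stand-ins, not a real medical setup): the classifier maps a fixed-size image tensor to scores over a fixed label set, while the chatbot maps free-form text to free-form text with no built-in notion of clinical ground truth.

        ```python
        # Minimal sketch, not production code: contrast a vision classifier with an LLM chatbot.
        import torch
        from PIL import Image
        from torchvision import models
        from transformers import pipeline

        # 1) Pattern recognition on an image (e.g. an X-ray): fixed-size tensor in,
        #    scores over a fixed set of labels out. (A generic ImageNet model stands in
        #    for a purpose-trained medical model; "xray.png" is a hypothetical file.)
        weights = models.ResNet18_Weights.DEFAULT
        classifier = models.resnet18(weights=weights).eval()
        image = weights.transforms()(Image.open("xray.png").convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            logits = classifier(image)  # shape [1, 1000]: one score per class label

        # 2) An LLM chatbot: free-form text in, free-form text out, with no fixed
        #    label set and no built-in notion of correctness.
        chat = pipeline("text-generation", model="gpt2")
        reply = chat("I have been feeling anxious lately.", max_new_tokens=40)
        print(reply[0]["generated_text"])
        ```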

        I’d invite you to read this very long paper about “The Ethics of Advanced AI Assistants”, which is a bit off topic but focuses on the interaction between AI chatbots and humans, and its consequences.

        So ultimately you need to decide what you want to talk about… Chatbots? Imaging? RAG and information assistants for doctors? Expert systems or algorithms that match symptoms to diagnoses? You have to differentiate, because they’re not all the same, and mixing them up makes your argument wrong.

        And current AI isn’t advanced enough to handle human ambiguity and factual information. As your text demonstrates, it makes lots of factual errors and makes things up out of thin air. Your text also entirely misses the point; the conclusion lacks inspiration and skips the interesting things AI actually excels at. And from my own experience I can say it doesn’t handle complexity at the level the task of mental health therapy would require. I’ve talked a lot to chatbots. They engage in a conversation and give you advice, but not always the correct advice, especially once things get more entangled. Sometimes they say things that are wrong, or give recommendations that would leave me worse off than before. That could be devastating for someone in a bad mental state, and it’s already a reason why professional therapists don’t use them. AI also really struggles to understand my perspective. I’m a human. I sometimes have complicated needs and wants; things are ambiguous, or I want conflicting things. It really shows that current chatbots aren’t intelligent enough. They can do simple tasks, but every time I start telling a chatbot my complicated real-world problems, it can’t handle them and gives me random opinions. That’s not helpful, and it shows they’re (as of now) not suitable for more. I’ve also talked to other humans who self-medicate by talking to their chatbots. Everyone I’ve talked to says it helps them, but they’ve made similar observations about the performance of current AI technology.

        I share the view that, in the longer term, it will likely be a useful addition to therapy, especially for narrower tasks, but that’s still science fiction as of now. And we need some research done before harming patients with untested technology.

        And a last bit: you brushed over the main thing that could make AI excel in mental health therapy in the middle of your text and then didn’t even mention it in the conclusion. The strongest argument for AI chatbots in mental health therapy is accessibility and affordability. There is a severe lack of psychologists and psychiatrists, which makes it difficult for people to get therapy. Therapy is also sometimes expensive and has barriers in general. AI could alleviate that, and this is the single best argument for your position! On the other hand, your argument that AI could do therapy better than an experienced professional is just plain wrong at the current state of AI.