Researchers at MIT, NYU, and UCLA develop an approach to help evaluate whether large language models like GPT-4 are equitable enough to be clinically viable for mental health support.
To accomplish this, the researchers asked two licensed clinical psychologists to evaluate 50 randomly sampled Reddit posts seeking mental health support, pairing each post with either a Redditor’s real response or a GPT-4-generated response. Without knowing which responses were real and which were AI-generated, the psychologists were asked to assess the level of empathy in each response.
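For readers who find the setup hard to picture, here is a minimal sketch of what such a blinded pairing could look like in code. The function names, field names, and coin-flip assignment are assumptions for illustration, not the researchers’ actual pipeline.

```python
# Minimal sketch of a blinded empathy-rating setup like the one described above.
# Field names, the rating workflow, and the coin-flip assignment are assumptions
# for illustration, not the study's actual code.
import random

def build_blinded_evaluation(posts, real_responses, gpt4_responses, seed=0):
    """Pair each post with either its real Reddit response or a GPT-4 response,
    hiding the source so raters cannot tell which is which."""
    rng = random.Random(seed)
    items = []
    for post_id, post in posts.items():
        use_real = rng.random() < 0.5  # coin flip: human vs. model-generated
        response = real_responses[post_id] if use_real else gpt4_responses[post_id]
        items.append({
            "post": post,
            "response": response,
            "source": "human" if use_real else "gpt4",  # kept hidden from raters
        })
    rng.shuffle(items)
    return items

def rater_view(items):
    """What the psychologists would see: post plus response, with no source label."""
    return [{"post": it["post"], "response": it["response"]} for it in items]
```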
I doubt this framework is going to accurately detect anything at all about the usefulness of chatbots in this context, whether about race or anything else.
I don’t think using chatbots for psychology is a good idea, but this study isn’t the way to make that determination.
The problem with using GPT as it currently stands is that you can ask it the same question 27 times and get 18 different answers, one of them a hallucination.
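For anyone who wants to check that variability themselves, here is a rough sketch of the kind of repeat-sampling loop involved, assuming the OpenAI Python client (v1+); the model name, prompt, and temperature are placeholders, not anything from the study.

```python
# Ask the same question repeatedly at a nonzero sampling temperature and count
# how many distinct answers come back. A rough sketch assuming the OpenAI Python
# client (>=1.0); model, prompt, and temperature are illustrative placeholders.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
question = "I've been feeling hopeless lately. What should I do?"

answers = []
for _ in range(27):
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": question}],
        temperature=1.0,  # nonzero temperature => sampled, nondeterministic output
        max_tokens=100,
    )
    answers.append(resp.choices[0].message.content.strip())

distinct = Counter(answers)
print(f"{len(distinct)} distinct answers out of {len(answers)} runs")
```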
As we speak, Elon Musk has his best engineers working on developing artificial racism.
Chat bots are already racist. They just have to let it run wild.
It’s physically impossible for an LLM to hold prejudice.
You are so entirely mistaken. AI is just as biased as the data it is trained on. That applies to machine learning as well as LLMs.
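To make that point concrete, here is a toy sketch of how a model fit to skewed labels simply reproduces the skew; the groups and ratings below are invented purely for illustration.

```python
# Toy illustration: a model fit to skewed labels reproduces the skew.
# The groups and ratings here are fabricated purely for illustration.
from statistics import mean

# Hypothetical training labels: empathy ratings of responses, broken down by the
# poster's demographic group. Group B's responses were rated lower by annotators.
training_data = {
    "group_a": [4, 5, 4, 5, 4],
    "group_b": [2, 3, 2, 3, 2],
}

# "Training" the simplest possible model: predict the group-wise average rating.
model = {group: mean(ratings) for group, ratings in training_data.items()}

# The model's predictions inherit the annotators' gap as-is.
print(model)                                              # {'group_a': 4.4, 'group_b': 2.4}
print(round(model["group_a"] - model["group_b"], 2))      # 2.0 -- the learned bias
```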