• Echo Dot · English · 2 months ago

    Actually, hallucinations have gone down as AI training has improved, largely through techniques like prompting models to provide evidence for their claims. When a model is prompted to provide evidence, it is much less likely to hallucinate in the first place.

    The problem really stems from how the older models were originally trained. They were trained on data where a question was asked and a response was given; nowhere in the data set was there a question whose answer was "I'm sorry, I do not know," so the model was unintentionally taught that it is never acceptable to decline to answer. More modern models have been trained better: their training data includes examples showing that not answering is acceptable. On top of that, they can now perform internet searches, so, like a human, they can go look up information when they recognize it isn't in their training data.

    That being said, Google’s AI is an idiot.