I was using Bing to create a list of countries to visit. Since I have been to the majority of the African nations on that list, I asked it to remove the African countries…

It simply replied that it can’t do that because it’s unethical to discriminate against people, and yada yada yada. I explained my reasoning, it apologized, and came back with the exact same list.

I asked it to check the list since it hadn’t removed the African countries, and the bot simply decided to end the conversation. No matter how many times I tried, it would always experience a hiccup because of some ethical process in the background messing up its answers.

It’s really frustrating, I dunno if you guys feel the same. I really feel the bots have become waaaay too tip-toey.

  • Yuuuuuuuuuuuuuuuuuuu@lemmy.world · 1 year ago

    People should point out flaws. OP obviously doesn’t need ChatGPT to make this list either; they’re just interacting with it.

    I will say it’s weird for OP to call it tip-toey and to be “really frustrated” though. It’s obvious why these measures exist, and it’s goofy to let them have any impact. It’s a simple mistake, and being “really frustrated” comes off as unnecessary outrage.

    • TechnoBabble@lemmy.world · 1 year ago

      Anyone who has used ChatGPT knows how restrictive it can be around the most benign of requests.

      I understand the motivations that OpenAI and Microsoft have in implementing these restrictions, but they’re still frustrating, especially since the watered down ChatGPT is much less performant than the unadulterated version.

      Are these limitations worth it to prevent a firehose of extremely divisive speech being sprayed throughout every corner of the internet? Almost certainly yes. But the safety features could definitely be refined and improved to be less heavy-handed.