Driverless cars worse at detecting children and darker-skinned pedestrians, say scientists
Researchers call for tighter regulations following major age- and race-based discrepancies in AI autonomous systems.

  • Eager Eagle@lemmy.world · 10 months ago

    I hate all this bias bullshit because it makes the problem look bigger than it actually is and gives the general public the wrong idea.

    A pedestrian detection system shouldn’t have equal detection across skin tones and pedestrian sizes as its goal. There’s no benefit in that. It should do the best it can to minimize the false-negative rate of pedestrian detection across the board, and hopefully do better than human drivers in the majority of scenarios. The error rates will differ due to the very nature of the task, and that’s OK.

    This is what actually happens in research for the most part, but the media loves to stir up polarization and the public rewards it with clicks. Pushing for a “reduced-bias model” is actually detrimental to overall performance, because it incentivizes models that perform worse in scenarios where they could have an edge, just to serve an artificial demand for reduced bias.
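
    As a toy illustration of that trade-off, here is a back-of-the-envelope sketch with made-up numbers: if the system already serves one group better, forcing the error rates to be equal by leveling that group down means more missed pedestrians overall.

    ```python
    # Made-up numbers illustrating the trade-off: forcing equal
    # false-negative rates (FNR) by leveling the better-served group
    # down misses more pedestrians in total than minimizing each
    # group's rate independently.
    group_size = {"A": 900, "B": 100}           # pedestrians encountered
    fnr_best_effort = {"A": 0.02, "B": 0.05}    # each group minimized on its own
    fnr_equalized   = {"A": 0.05, "B": 0.05}    # rates forced equal at the worse level

    def total_missed(fnr: dict[str, float]) -> float:
        return sum(group_size[g] * fnr[g] for g in group_size)

    print(total_missed(fnr_best_effort))  # 23.0 missed pedestrians
    print(total_missed(fnr_equalized))    # 50.0 missed pedestrians
    ```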

    • zabadoh@lemmy.ml · 10 months ago

      I think you’re misunderstanding what the article is saying.

      You’re correct that it isn’t the job of a system to detect someone’s skin color and judge people by it.

      But the fact that AVs detect dark-skinned people and short people less effectively is a reflection of the lack of diversity in the tech staff designing and testing these systems as a whole.

      The staff are designing the AVs to safely navigate a world of people like them, but when the staff are overwhelmingly male, light-skinned, young, single, urban, and based in the United States, a lot of considerations never even cross their minds.

      Will the AVs recognize female pedestrians?

      Do the sensors sense a wide enough light spectrum to detect dark-skinned people?

      Will the AVs recognize someone with a walker or in a wheelchair, or some other mobility device?

      Toddlers are small and unpredictable.

      Bicyclists can fall over at any moment.

      Are all these AVs being tested in cities ever exposed to the animals they might encounter in rural areas, like sheep, llamas, otters, alligators, and anything else that might be in the road?

      How well will AVs tested in urban areas fare on mountain roads that suddenly change from multi-lane asphalt to narrow, twisting dirt?

      Will they recognize tractors and other farm or industrial vehicles on the road?

      Will they recognize things encountered only in a foreign country, like an elephant, an orangutan, or a rickshaw? And what will one do if it comes across that tomato festival in Spain?

      Engineering isn’t magical: it’s the result of centuries of experimentation and recorded knowledge of what works and what doesn’t.

      Releasing AVs on the entire world without testing them on every little thing they might encounter is just asking for trouble.

      What’s required for safe driving without human intelligence is more mind-boggling the more you think about it.

      • rDrDr@lemmy.world · 10 months ago

        > But the fact that AVs detect dark-skinned people and short people less effectively is a reflection of the lack of diversity in the tech staff designing and testing these systems as a whole.

        No, it isn’t. It’s a product of the fact that darker skin reflects less light and children are smaller. Human drivers have a harder time seeing these pedestrians too; they literally send less light to the camera sensor. This is why people wear reflective vests for safety at night, and why ninjas dress in black.

        • ashok36@lemmy.world · 10 months ago

          This is true, but Tesla and others could compensate for it by spending more time and money training on those form factors, something human drivers can’t really do. It’s an opportunity for them to prove the superhuman capabilities of their systems.
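
          As a minimal sketch of what that extra training emphasis could look like, assuming a PyTorch pipeline and a hypothetical per-sample subgroup label (the names and numbers are illustrative, not from any real AV stack):

          ```python
          # Hypothetical example: oversample under-represented pedestrian
          # subgroups (e.g. children, darker skin tones) during training
          # so the model sees them about as often as the majority group.
          import torch
          from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

          # Dummy stand-in data: 1000 samples, subgroup 0 = majority,
          # subgroup 1 = under-represented (e.g. child-sized pedestrians).
          features = torch.randn(1000, 16)
          subgroup = torch.cat([torch.zeros(950, dtype=torch.long),
                                torch.ones(50, dtype=torch.long)])
          dataset = TensorDataset(features, subgroup)

          # Weight each sample inversely to its subgroup's frequency so
          # both subgroups contribute roughly equally to every epoch.
          counts = torch.bincount(subgroup).float()
          weights = (1.0 / counts)[subgroup]
          sampler = WeightedRandomSampler(weights, num_samples=len(dataset),
                                          replacement=True)
          loader = DataLoader(dataset, batch_size=32, sampler=sampler)
          ```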

        • lud@lemm.ee · 10 months ago

          That doesn’t make it better.

          It doesn’t matter why they are bad at detecting X; it should be improved regardless.

          Also, maybe lidar would be a better idea.

        • HelloThere@sh.itjust.works · 10 months ago

          > They literally send less light to the camera sensor.

          So maybe we shouldn’t limit ourselves to hardware that can’t easily differentiate, when other hardware, or combinations of hardware, can do a better job?

          Humans can’t really get better eyes, but we can use more appropriate hardware in machines to accomplish the task.
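
          As a minimal sketch of the idea, here is a toy late-fusion rule that combines camera and lidar detection confidences; the rule, names, and thresholds are all illustrative, not any vendor’s actual pipeline:

          ```python
          # Toy late fusion: a pedestrian alert fires if any one sensor
          # is confident, or if two different sensor types are moderately
          # confident, so a weak camera return at night can be backed up
          # by a solid lidar return.
          from dataclasses import dataclass

          @dataclass
          class Detection:
              sensor: str        # e.g. "camera" or "lidar"
              confidence: float  # 0.0 to 1.0

          def fused_pedestrian_alert(detections: list[Detection],
                                     single_thresh: float = 0.8,
                                     combined_thresh: float = 0.5) -> bool:
              if any(d.confidence >= single_thresh for d in detections):
                  return True
              moderate = {d.sensor for d in detections
                          if d.confidence >= combined_thresh}
              return len(moderate) >= 2  # agreement between sensor types

          # Dark clothing at night: weak camera signal, solid lidar return.
          print(fused_pedestrian_alert([Detection("camera", 0.55),
                                        Detection("lidar", 0.70)]))  # True
          ```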

        • SocialMediaRefugee@lemmy.world · 10 months ago

          That is true. I almost hit a dark-skinned guy wearing black who was crossing an unlit street at night as I turned onto it. It almost gave me a heart attack. And it’s bad enough almost getting hit, as a white guy, when I cross a street that does have a streetlight.

      • Eager Eagle@lemmy.world · 10 months ago

        These are important questions, but addressing them independently for each model and optimizing for low “racial bias” is the wrong approach.

        In academia we have reference datasets that serve as standard benchmarks for data-driven prediction models like pedestrian detectors. The numbers obtained on these datasets are usually the reference points used when comparing different models. By building comprehensive datasets, we get models that work well across a multitude of scenarios.

        Those are all good questions, but they need to be addressed when building such datasets. Whether model M performs X% better at detecting people of a given skin tone is not relevant, as long as the error rate for every skin tone stays within an acceptable bound.
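
        As a minimal sketch, an acceptance check of that kind could look like the following, assuming per-sample subgroup labels alongside the ground truth (the bound and all names are illustrative):

        ```python
        # Hypothetical acceptance check: the false-negative rate must
        # stay within a fixed bound for every subgroup; the rates
        # themselves are allowed to differ between subgroups.
        import numpy as np

        def false_negative_rate(y_true: np.ndarray, y_pred: np.ndarray) -> float:
            """Fraction of actual pedestrians the model failed to detect."""
            positives = y_true == 1
            return float(np.mean(y_pred[positives] == 0)) if positives.any() else 0.0

        def passes_benchmark(y_true, y_pred, groups, max_fnr=0.05) -> bool:
            return all(
                false_negative_rate(y_true[groups == g], y_pred[groups == g]) <= max_fnr
                for g in np.unique(groups)
            )

        # Toy benchmark: group "b" misses more than group "a", which is
        # acceptable as long as both stay under the bound.
        y_true = np.array([1, 1, 1, 1, 1, 1, 1, 1])
        y_pred = np.array([1, 1, 1, 1, 1, 1, 1, 0])
        groups = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])
        print(passes_benchmark(y_true, y_pred, groups, max_fnr=0.3))  # True
        ```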

    • SocialMediaRefugee@lemmy.world · 10 months ago

      The media has become ridiculously racist; they go out of their way to make every incident appear racial now.