This can be extended to self-driving cars that need to “decide” whom they would rather run over.

  • kubica@kbin.social · 1 year ago

    Also, a question remains of whether the law should dictate the ethical standards that all autonomous vehicles must use, or whether individual autonomous car owners or drivers should determine their car’s ethical values, such as favoring safety of the owner or the owner’s family over the safety of others.[13] Although most people would not be willing to use an automated car that might sacrifice themselves in a life-or-death dilemma, some believe the somewhat counterintuitive claim that using mandatory ethics values would nevertheless be in their best interest. According to Gogoll and Müller, “the reason is, simply put, that [personalized ethics settings] would most likely result in a prisoner’s dilemma.”[50]
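
    To make the prisoner’s-dilemma claim concrete, here is a minimal sketch in Python. The payoff numbers are invented for illustration and are not from Gogoll and Müller; they only encode the assumption that a “selfish” ethics setting is individually tempting but collectively worse than a mandated impartial one.

    ```python
    # Hypothetical payoffs: each driver picks an ethics setting for their car,
    # "selfish" (always protect the occupant) or "impartial" (minimise total harm).
    # Values are expected safety for driver A (higher is better), given both choices.
    PAYOFF_A = {
        ("selfish",   "selfish"):   2,  # everyone self-protective: roads worse for all
        ("selfish",   "impartial"): 4,  # A free-rides on B's impartial car
        ("impartial", "selfish"):   1,  # A's car may sacrifice A while B's protects B
        ("impartial", "impartial"): 3,  # mutual impartiality: best collective outcome
    }

    def best_response(other_setting: str) -> str:
        """Return A's safety-maximising setting, given what B chose."""
        return max(("selfish", "impartial"),
                   key=lambda mine: PAYOFF_A[(mine, other_setting)])

    # "selfish" dominates no matter what others pick...
    assert best_response("selfish") == "selfish"
    assert best_response("impartial") == "selfish"

    # ...yet the resulting outcome (2) is worse for everyone than a mandated
    # impartial setting (3) - the structure of a prisoner's dilemma.
    print(PAYOFF_A[("selfish", "selfish")], "<", PAYOFF_A[("impartial", "impartial")])
    ```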

  • Damaskox@lemmy.worldOP · 1 year ago

    Concerning the negative votes -

    My intention was not to endorse or condemn such an AI system.
    My point was to bring it here for discussion and to think about it neutrally 😁

    • SkyNTP@lemmy.ml · 1 year ago

      It’s a straw-man argument. The fact remains that human drivers are far worse safety hazards. Unquestionably. In the best case, this philosophical argument just becomes pointless navel gazing that we bring out for cars but conveniently ignore for things like airplanes, assembly line machines, and virtually every other human activity that involves engineering decisions.

      Worst case, it serves to distract from actual moral hazards, like continuing to let people operate two-tonne steel boxes around vulnerable people.

  • teejay@lemmy.world · 1 year ago

    Radiolab did a great episode on this very topic.

    It gets really interesting when you factor in corporate greed. It’s not hard to imagine car companies selling you a premium option (or worse, a subscription) where the car makes decisions that prioritize your life and safety over the people outside, even if multiple people would be maimed or killed to keep the driver safe.

    • Damaskox@lemmy.worldOP · 1 year ago

      At least I’d hope that the driver/car owner would be informed about the AI system in their car and, if that’s the case, that the AI could decide against their life.
      Then they “just” need to decide whether a car with such an AI system is worth it or not.

      Not telling them that their car has such an AI would be unethical, to say the least.

      • teejay@lemmy.world · 1 year ago

        Sure. Buried in some cryptic legalese in paragraph 3 on page 400 in 1 of 8 different EULAs that the car owner had to accept when first buying the car.