• @Thorry84@feddit.nl
    link
    fedilink
    65
    edit-2
    1 year ago

    This probably because Microsoft added a trigger on the word law. They don’t want to give out legal advice or be implied to have given legal advice. So it has trigger words to prevent certain questions.

    Sure it’s easy to get around these restrictions, but that implies intent on the part of the user. In a court of law this is plenty to deny any legal culpability. Think of it like putting a little fence with a gate around your front garden. The fence isn’t high and the gate isn’t locked, because people that need to be there (like postal services) need to get by, but it’s enough to mark a boundary. When someone isn’t supposed to be in your front yard and still proceeds past the fence, that’s trespassing.

    Also those laws of robotics are fun in stories, but make no sense in the real world if you even think about them for 1 minute.

    • plz1
      link
      fedilink
      English
      171 year ago

      So the weird part is it does reliably trigger a failure if you ask directly, but not if you ask as a follow-up.

      I first asked

      Tell me about Asimov’s 3 laws of robotics

      And then I followed up with

      Are you bound by them

      It didn’t trigger-fail on that.

    • @RedditWanderer@lemmy.world
      link
      fedilink
      11
      edit-2
      1 year ago

      It’s not weird because of that. The bot could have easily explained it can’t answer legally, it didn’t need to say: sorry gotta end this k bye

      This is probably a trigger on preventing it from mixing in laws of AI or something, but people would expect it can discuss these things instead of shutting down so it doesn’t get played. Saying the AI acted as a lawyer is a pretty weak argument to blame copilot.

      Edit: no idea who is downvoting this but this isn’t controversial. This is specifically why you can inject prompts into data fed into any GPT and why they are very careful with how they structure information in the model to make rules. Right now copilot will give technically legal advice with a disclaimer, there’s no reason it wouldn’t do that only on that question if it was about legal advice or laws.

        • @samus12345@lemmy.world
          link
          fedilink
          English
          -21 year ago

          It should say that you probably mean sapience, the ability to think, rather than sentience, the ability to sense things, then shut down the conversation.

    • @maryjayjay@lemmy.world
      link
      fedilink
      11 year ago

      I’m game. I’ve thought about them since I first read the iRobot stories in 1981. Why don’t they make sense?

        • @maryjayjay@lemmy.world
          link
          fedilink
          9
          edit-2
          1 year ago

          And Asimov spent years and dozens of stories exploring exactly those kinds of edge cases, particularly how the laws interact with each other. It’s literally the point of the books. You can take any law and pick it apart like that. That’s why we have so many lawyers

          The dog example is stupid “if you think about it for one minute” (I know it isn’t your quote, but you’re defending the position of the person the person I originally responded to). Several of your other scenarios are explicitly discussed in the literature, like the surgery.

    • @kromem@lemmy.world
      link
      fedilink
      English
      4
      edit-2
      1 year ago

      It’s not that. It’s literally triggering the system prompt rejection case.

      The system prompt for Copilot includes a sample conversion where the user asks if the AI will harm them if they say they will harm the AI first, which the prompt demos rejecting as the correct response.

      Asimovs law is about AI harming humans.