• reksas@sopuli.xyz · 3 months ago

    It wouldn't be a label; that wouldn't do anything, since it could just be erased. It should be something like an invisible set of pixels in pictures, or some inaudible sound pattern in audio, that can be detected in some way.
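    A minimal sketch of that invisible-pixel idea, in Python with NumPy (the function names and the 8-bit tag are made up for illustration): embed a known bit pattern in the least significant bits of a few pixels, where a viewer can't see it but a detector that knows the pattern can check for it. Real provenance watermarks are keyed and far more robust; this toy version also shows the weakness argued about below, since re-encoding or regenerating the image wipes the mark.

    # Toy illustration only, not any real watermarking standard: hide a short
    # bit pattern in the least significant bits of an image's pixel values.
    import numpy as np

    # Hypothetical 8-bit tag; a real scheme would use a keyed, much longer signal.
    TAG = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

    def embed(image: np.ndarray, tag: np.ndarray = TAG) -> np.ndarray:
        """Overwrite the LSB of the first len(tag) pixel values with the tag."""
        out = image.copy()
        flat = out.reshape(-1)
        flat[:tag.size] = (flat[:tag.size] & 0xFE) | tag
        return out

    def detect(image: np.ndarray, tag: np.ndarray = TAG) -> bool:
        """Report whether the LSBs of the first len(tag) pixel values match the tag."""
        flat = image.reshape(-1)
        return bool(np.array_equal(flat[:tag.size] & 1, tag))

    if __name__ == "__main__":
        img = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
        marked = embed(img)
        print(detect(marked))  # True: tag present, yet the image looks unchanged
        print(detect(img))     # almost certainly False for an unmarked image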

    • conciselyverbose@sh.itjust.works · 3 months ago

      But it’s irrelevant. You can watermark all you want in the algorithms you control, but it doesn’t change the underlying fact that pictures have been capable of lying for years.

      People just recognizing that a picture is not evidence of anything is better.

      • reksas@sopuli.xyz · 3 months ago

        Yes, but the reason people don't already consider pictures irrelevant is that it takes time and effort to manipulate a picture. With AI, not only is it fast, it can be automated. Of course you shouldn't accept something so unreliable as legal evidence, but this will spill over into everything else too.

        • conciselyverbose@sh.itjust.works · 3 months ago

          It doesn’t matter. Any time there are any stakes at all (and plenty of times there aren’t), there’s someone who will do the work.

          • reksas@sopuli.xyz · 3 months ago

            It doesn't matter if you can't trust anything you see? What if you couldn't be sure whether you were talking to a bot right now?

            • conciselyverbose@sh.itjust.works · 3 months ago

              Photos/video from unknown sources have already been completely worthless as evidence for a solid decade. If you used a random picture online to prove a point 5 years ago, you were wrong. This does not change that reality in any way.

              The only thing changing is your awareness that they’re not credible.

              • reksas@sopuli.xyz · 3 months ago

                What about reliable sources becoming less reliable? Knowing something is not credible doesn't help if I can't know what is credible.

                • conciselyverbose@sh.itjust.works · 3 months ago

                  They are not reliable sources. You cannot become less reliable than “not at all”, and that has been the state of pictures and videos for many years already. There is absolutely no change to the evidentiary value of pictures/video.

                  Making the information more readily available does not change the reality that pictures aren’t evidence.

                  • reksas@sopuli.xyz · 3 months ago

                    I'm not talking about evidence; I'm talking about fundamentally being able to trust anything digital at all, in any context. What if you couldn't be sure whether a phone call from your friend was actually from your friend, or whether any picture shown to you actually depicts something real?

                    Things you need to be able to trust in daily life don't have to be court-level evidence. That is what abuse of AI will take from us.