• suburban_hillbilly@lemmy.ml · 29↑ 8↓ · edited · 4 months ago

    photos

    They aren’t photos. They’re photorealistic drawings done by computer algorithms. This might seem like a tiny quibble to many, but as far as I can tell it is the crux of the entire issue.

    There isn’t any actual private information about the girls being disclosed. The algorithms, for example, do not and could not know about and produce an unseen birthmark, mole, tattoo, piercing, etc. A photograph would have that information. What is being shown is an approximation of what similar-looking girls in the training set look like, with the girls’ faces stitched on top. That is categorically different from something like revenge porn, which is purely private information specific to the individual.

    I’m sure it doesn’t feel all that different to the girls in the pics, or to the boys looking at them for that matter. There is some degree of harm here, without question. But we must tread lightly, because there is real danger in categorizing algorithmic guesswork as reliable, which many authoritarian types are desperate to do.

    https://www.wired.com/story/parabon-nanolabs-dna-face-models-police-facial-recognition/

    This is the other side of the same coin. We cannot start treating the output of neural networks as fact. These are error-prone black boxes, and that must be driven hard into the consciousness of every living person.

    For some, I’m sure purely unrelated, reason I feel like reading Philip K. Dick again…

  • KillingTimeItself@lemmy.dbzer0.com · 10↑ 5↓ · 4 months ago

      They aren’t photos. They’re photorealistic drawings done by computer algorithms. This might seem like a tiny quibble to many, but as far as I can tell it is the crux of the entire issue.

      Most phone cameras alter the original image with AI shit now; it’s really common. They apply all kinds of weird corrections to make it look better. Plus, if it’s social media, there’s probably a filter in there somewhere. At what point does this become the Ship of Theseus?

      My point here is that if we’re arguing that AI images are, semantically, not photos, then most photos on the internet, including ones of people, would also arguably not be photos to some degree.

    • suburban_hillbilly@lemmy.ml · 6↑ 1↓ · 4 months ago

        The difference is that a manipulated photo starts with a photo. It actually contains recorded information about the subject. Deepfakes do not contain any recorded information about the subject unless that subject is also in the training set.

        Yes, it is semantics; it’s the reason we have different words for photography and drawing, and they are not interchangeable.

      • Rekorse@lemmy.dbzer0.com · 2↑ 1↓ · 4 months ago

          A deepfake does contain the prompt image provided by the creator. It didn’t create a whole new approximation of the face, because the entire pool it can pull from for that specific part is the single image, or group of images, provided by the prompter.

      • KillingTimeItself@lemmy.dbzer0.com · 1↑ · 4 months ago

          Deepfakes do not contain any recorded information about the subject unless that subject is also in the training set.

          This is explicitly untrue; they literally do. You are just factually wrong about this. While it may not be in the training data, how do you think it manages to replace the face of someone in one picture with the face of someone else in some other video?

          Do you think it just magically guesses? No, it literally uses a real picture of someone. In fact, back in the day with GANimation and early deepfake software, you literally had to train these AIs on pictures of the person you wanted to faceswap. Remember all those singing deepfakes that were super popular a couple of years ago? Yep, those were literally trained on real pictures.

          Regardless, you are still ignoring my point. My question here was how we can consider AI content to be “not a photo” while we consider photos that have been manipulated numerous times, through numerous different processes, and which are quite literally not the original photo, to be a literal “photo”. To rephrase it more simply for you and other readers: “why is AI-generated content not considered to be a photo, when a heavily altered photo of something that vaguely resembles its original photo in most aspects is considered to be a photo?”

          You seem to have missed the point of my question entirely, and simply said something wrong instead.

          Yes it is semantics

          No, it’s not; this is a Ship of Theseus premise. Semantics is just how we contextualize and conceptualize things in word form. The problem is not semantics (that’s merely how we convey the problem at hand); the problem is a philosophical conundrum that has existed for thousands of years.

          In fact, if we’re going by semantics here, “photograph” is technically rather broad, as it literally just defines itself as “something in the likeness of”, though taken by the method of photography. We could arguably drop that last part and simply use it to refer to something that is a likeness of something else. And we see this in contextual usage of words: a “photographic” copy is often used to describe something similar enough to something else that, in terms of a photograph, they appear to be the same thing.

          Think about scanning a paper document; that would be a photographic copy of a physical item. While it is literally taken via photography, in a contextual and semantic sense it just refers to the fact that the digital copy is photographically equivalent to the physical one.

        • suburban_hillbilly@lemmy.ml · 1↑ · 4 months ago

            Oh FFS, I clipped the word new. Of course it uses information in the prompt. That’s trivial. No one cares about it returning the information that was given to it in the prompt. Nevertheless, mea culpa. You got me.

            this is a Ship of Theseus premise here

            No, it really isn’t.

            The purpose of that paradox is that you unambiguously are recreating/replacing the ship exactly as you already know it to be. The reason the ‘AI’ in question here is even being used is that it isn’t doing that. It’s giving you back much more than it was given.

            The comparison would be if Theseus’s ship had been lost and you definitely don’t have the ship anymore, but had managed to recover the sail. If you take the sail to an experienced builder (the AI) who has never seen the ship, he might be able to build a reasonable approximation based on inferences from the sail and his wealth of knowledge, but nobody is going to be daft enough to assert it is the same ship. Does the wheel even have the same number of spokes? Does it have the same number of oars? The same weight of anchor?

            The only way you could even tell whether his attempted facsimile was close is if you already had intimate knowledge of the ship from some other source.

            …when a heavily altered photo of something that vaguely resembles its original photo in most aspects is considered to be a photo?”

            Disagree.

          • KillingTimeItself@lemmy.dbzer0.com · 1↑ · 4 months ago

              No, it really isn’t.

              I would consider it such. You said as much in your original post: the entire crux of the issue is the semantic difference between a real photograph, as physically taken by the camera, and whatever could be considered merely an image (for purposes of the semantic argument here, let’s say digitally drawn art, clip art, whatever; it doesn’t matter). It’s objectively not a photo, and that’s what matters here.

              The purpose of that paradox is that you unambiguously are recreating/replacing the ship exactly as you already know it to be. The reason the ‘AI’ in question here is even being used is that it isn’t doing that. It’s giving you back much more than it was given.

              Yeah, so the reason the thought experiment does this is that it creates an incredibly sterile environment, which allows us to easily study the question at hand. In this case it boils the question down to something as close as possible to “objective relation” versus “symbolic relation”, i.e. the two extremes of the thought experiment. It’s still not easy to define the true answer to the question, and that’s exactly why the setup is so sterile.

              The comparison would be if Theseus’s ship had been lost and you definitely don’t have the ship anymore, but had managed to recover the sail. If you take the sail to an experienced builder (the AI) who has never seen the ship, he might be able to build a reasonable approximation based on inferences from the sail and his wealth of knowledge, but nobody is going to be daft enough to assert it is the same ship. Does the wheel even have the same number of spokes? Does it have the same number of oars? The same weight of anchor?

              This is not what I was making my statement about. If you read my original comment you might pick up on this.

              Disagree.

              Yes, OK, and this is what my thought-experiment comparison was about. The specific thing I was asking you was how we define a photo and how we define an image, because what would normally be considered a photo could arguably be considered an image, on account of the various levels of image manipulation taking place.

              While rather nitpicky, I suppose, the point I’m making here is that your entire statement might be upended by the fact that the original photo used may not even be a photo at all, making the entire distinction redundant to begin with. Since you never defined what counts as a “photo” and what counts as an “image”, there is no clear distinction between them, other than the assumed AI image manipulation that you talked about. Which, like I said, most phones do.

              In short, I don’t think it’s a very good way of conceptualizing the fundamental problem here, because it’s rather loose in its requirements. If you wanted to argue that the resulting imagery simply is not akin to actual real imagery (in a literal sense), I see no reason to disagree. Unfortunately, however, the general populace does not care about the semantic definition of whether or not an image is a photo. As far as most people are concerned, it’s either “deepfaked” or “real”; there is no alternative.

              Legally, since we’d be talking about revenge porn and CP here, I don’t see a reason to differentiate between the semantics, because as far as the law is concerned, and as far as most of the general public is concerned, someone deepfaking revenge porn is arguably still just making revenge porn. And while AI-generated CP may not be real CP: marrying a 12-year-old is legal in some places, and it’d still be fucking weird if you did it. If you are creating AI CP, that’s pretty fucking weird, and there isn’t exactly a good argument for doing it (ignoring the one obvious counterexample).

  • daellat@lemmy.world · 2↑ · 4 months ago

      I’ve only read Do Androids Dream of Electric Sheep? by him; what other book(s) of his should I check out?

  • Xylogx@lemmy.ml · 1↑ · 4 months ago

      Whether or not you consider them photos, the DOJ considers them child porn, and you will still go to jail.