• Sibbo@sopuli.xyz
    10 months ago

    How can the training data be sensitive if no one ever agreed to give their sensitive data to OpenAI?

    • TWeaK@lemm.ee
      10 months ago

      Exactly this. And how can an AI that supposedly “doesn’t have the source material” in its database recall such information?

      • Jordan117@lemmy.world
        10 months ago

        IIRC based on the source paper the “verbatim” text is common stuff like legal boilerplate, shared code snippets, book jacket blurbs, alphabetical lists of countries, and other text repeated countless times across the web. It’s the text equivalent of DALL-E “memorizing” a meme template or a stock image – it doesn’t mean all or even most of the training data is stored within the model, just that certain pieces of highly duplicated data have ascended to the level of concept and can be reproduced under unusual circumstances.
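        The duplication effect described above can be illustrated with a toy sketch. This is not how GPT or DALL-E actually work (they are neural networks, not n-gram tables), but it shows the same statistical effect: when one phrase is repeated enough times in a corpus, even a trivial trigram model will reproduce it verbatim, while text seen only once gets pulled toward the duplicated "boilerplate". All names and sentences below are made up for the demonstration.

        ```python
        from collections import Counter, defaultdict

        # Toy corpus: one "legal boilerplate" sentence duplicated many times,
        # plus unique sentences that each appear only once.
        boilerplate = "this agreement is governed by the laws of the state"
        unique = [
            "my cat prefers the blue cushion by the window",
            "the recipe calls for two cups of flour",
        ]
        corpus = [boilerplate] * 50 + unique

        # Word-level trigram model: next-word counts keyed on the two previous words.
        counts = defaultdict(Counter)
        for sentence in corpus:
            words = sentence.split()
            for a, b, c in zip(words, words[1:], words[2:]):
                counts[(a, b)][c] += 1

        def generate(w1, w2, n=8):
            """Greedy generation: always pick the most frequent next word."""
            out = [w1, w2]
            for _ in range(n):
                key = (out[-2], out[-1])
                if key not in counts:
                    break
                out.append(counts[key].most_common(1)[0][0])
            return " ".join(out)

        # The duplicated phrase dominates the statistics, so greedy decoding
        # reproduces it word for word:
        print(generate("this", "agreement"))
        # this agreement is governed by the laws of the state

        # A once-seen sentence derails into the boilerplate at "by the":
        print(generate("my", "cat"))
        ```

        The model hasn't "stored" the whole corpus; it just can't avoid regenerating the phrase that swamped its statistics, which is the point about highly duplicated web text.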