OpenAI just admitted it can’t identify AI-generated text. That’s bad for the internet and it could be really bad for AI models.

In January, OpenAI launched a system for identifying AI-generated text. This month, the company scrapped it.

  • EuphoricPenguin@normalcity.life · 1 year ago

    Unless I’m mistaken, aren’t GANs mostly old news? Most of the current SOTA image generation models and LLMs are either diffusion-based, transformers, or both. GANs can still generate some pretty darn impressive images, even from a few years ago, but they proved hard to steer and were often trained to generate a single kind of image.
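
For reference, the adversarial setup GANs use can be shown in a toy 1-D example (my own sketch with numpy, not any real model's code): a "generator" that can only shift a Gaussian tries to fool a logistic "discriminator", which in turn learns to separate real samples from fakes:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Real data: N(4, 1). The "generator" can only shift a N(0, 1) source
# by theta; the "discriminator" is a logistic classifier on w*x + b.
real_mean = 4.0
theta = 0.0              # generator parameter (mean shift)
w, b = 0.0, 0.0          # discriminator parameters
lr_d, lr_g = 0.1, 0.02   # discriminator learns faster than the generator

for step in range(1000):
    # Discriminator: a few ascent steps on log D(real) + log(1 - D(fake)).
    for _ in range(5):
        real = rng.normal(real_mean, 1.0, 64)
        fake = theta + rng.normal(0.0, 1.0, 64)
        d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
        w += lr_d * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
        b += lr_d * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: ascend log D(fake) (the "non-saturating" GAN loss).
    fake = theta + rng.normal(0.0, 1.0, 64)
    theta += lr_g * np.mean(1 - sigmoid(w * fake + b)) * w

print(f"generator mean after training: {theta:.2f}")  # drifts toward 4
```

With the discriminator learning faster than the generator, the generator's mean ends up near the real data's mean of 4; that push-pull is the same idea people have in mind when they talk about generators and detectors training against each other.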

    • BackupRainDancer@lemmy.world · 1 year ago

      I haven’t been in decision analytics for a while (and people smarter than I am are working on the problem), but I meant more along the lines of the “model collapse” issue. A human giving a thumbs up or down doesn’t turn the output into human-written training data to be fed back in. Eventually the stuff it outputs becomes “the most likely response that this user will thumbs-up and accept.” (Note: I’m assuming the thumbs up/down ratings get pulled back into model feedback.)

      Per my understanding, that’s not going to remove the core issue, which is this:

      Any sort of AI-detection arms race is doomed. There is ALWAYS new “real” video for training, and even if GANs are a bit outmoded, the core concept of training on synthetically generated content is a hot thing right now. Technically, whoever creates the fake videos to train on would have a bigger training set than the checkers.

      Since we see model collapse when we feed too much of this back into the model, we’re in a bit of an odd place.
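
To make the model-collapse worry concrete, here's a toy illustration (my own sketch, not code from any paper): fit a Gaussian to samples, then repeatedly refit to samples drawn from the previous fit. A little of the tails gets clipped every generation, and the estimated spread decays toward zero:

```python
import numpy as np

rng = np.random.default_rng(42)

# Generation 0: the "real" data distribution is N(0, 1).
mean, std = 0.0, 1.0
stds = [std]

# Each generation trains a new "model" (here just a fitted Gaussian)
# on 30 samples drawn from the previous generation's model.
for generation in range(300):
    samples = rng.normal(mean, std, 30)
    mean, std = samples.mean(), samples.std()  # MLE fit (biased, ddof=0)
    stds.append(std)

# The biased variance estimate shrinks by (n-1)/n in expectation each
# round, so diversity is lost geometrically: the fitted std collapses.
print(f"std: 1.0 -> {stds[-1]:.4f} after 300 generations")
```

Real models are vastly more complex, but the mechanism is the same: each round of training on the previous round's output throws away a little of the original distribution's variance.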

      We’ve not even had an LLM available for a full year, but we’re already having trouble distinguishing its output.

      Making waffles, so I only did a light google, but I don’t really think ChatGPT is leveraging GANs for its main algorithms; it’s just that the GAN concept could easily be applied to LLM text to make delineation even harder.

      We’re probably going to need a lot more tests and interviews on critical reasoning and logic skills. Which is probably how it should have been but it’ll be weird as that happens.

      sorry if grammar is fuckt - waffles

      • EuphoricPenguin@normalcity.life · 1 year ago

        So a few tidbits you reminded me of:

        • You’re absolutely right: there’s what’s called an alignment problem between what a human thinks superficially looks like a quality answer and what would actually be a quality answer.

        • You’re correct in that it will always be somewhat of an arms race to detect generated content, as lossy compression and metadata scrubbing can do a lot to make an image unrecognizable to detectors. A few people are trying to create some sort of integrity check for media files, but it would create more privacy issues than it would solve.

        • We’ve had LLMs for quite some time now. I think the most notable release in recent history, aside from ChatGPT, was GPT-2 in 2019, as it introduced a lot of people to the concept. It was one of the first language models that was truly “large,” although they’ve gotten much bigger since the release of GPT-3 in 2020. RLHF and the focus on fine-tuning for chat and instruction-following weren’t really a thing until the past year.

        • Retraining image models on generated imagery does seem to cause problems, but I’ve noticed fewer issues when people have trained FOSS LLMs on text from OpenAI. In fact, it seems to be a relatively popular way to build training or fine-tuning datasets. Perhaps training a model from scratch could present issues, but generally speaking, training a new model on generated text seems to be less of a problem.

        • Critical reading and thinking were always a requirement, as I believe you say, but they’re certainly needed for interpreting the output of LLMs in a factual context. I don’t really see LLMs themselves outperforming humans on reasoning at this stage, but the text they generate will certainly make those human skills more of a necessity.

        • Most of the text models released by OpenAI are so-called “Generative Pretrained Transformer” models, with the keyword being “transformer.” Transformers are a separate model architecture from GANs, but are certainly similar in more than a few ways.
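
For the curious, the “transformer” part boils down to scaled dot-product attention. A minimal numpy sketch of that one operation (illustrative only; real transformers add learned projections, multiple heads, masking, and much more):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention, the core op inside a transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V               # each output is a weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))   # 5 tokens, 8-dim embeddings
out = attention(x, x, x)      # self-attention: Q = K = V = x
print(out.shape)              # (5, 8): one mixed vector per token
```

Unlike a GAN, there's no adversary here: every token's output is just a softmax-weighted average of the other tokens' values, trained end to end with ordinary gradient descent.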

        • BackupRainDancer@lemmy.world · 1 year ago

          These all align with my understanding! Only thing I’d mention is that when I said “we’ve not had llms available” I meant “LLMs this powerful ready for public usage”. My b

          • EuphoricPenguin@normalcity.life · 1 year ago

            Yeah, that’s fair. The early versions of GPT-3 kinda sucked compared to what we have now. For example, they basically couldn’t rhyme. RLHF or some of the more recent advances seem to have turbocharged that aspect of LLMs.