• Eager Eagle@lemmy.world
    7 months ago

    Well, it’s not exactly impossible because of that; it’s just unlikely they’ll use a discriminator for the task, because a large share of generated content is effectively indistinguishable from human-written text, either because the model was prompted to avoid “LLM speak” or because the text was heavily edited. They’d risk a high false positive rate.
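
    A quick back-of-the-envelope sketch of why that false positive risk bites (all the numbers below are made up for illustration, not measurements of any real detector):

    ```python
    # Hypothetical corpus and detector rates, chosen only to illustrate
    # how false positives erode a discriminator's precision at scale.
    human = 900_000      # assumed human-written documents
    generated = 100_000  # assumed LLM-generated documents
    tpr = 0.70           # assumed true positive rate (catches 70% of LLM text)
    fpr = 0.05           # assumed false positive rate (flags 5% of human text)

    true_pos = generated * tpr   # LLM documents correctly flagged
    false_pos = human * fpr      # human documents wrongly flagged
    precision = true_pos / (true_pos + false_pos)
    print(f"precision: {precision:.2f}")  # ~0.61: nearly 4 in 10 flags are wrong
    ```

    Even with a seemingly modest 5% false positive rate, the human-written majority dominates, so a large fraction of everything the detector flags is actually human work.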