Google, Meta, Microsoft, OpenAI and TikTok outline methods they will use to try to detect and label deceptive AI content

  • gravitas_deficiency@sh.itjust.works · 3 months ago

    I’m 100% sure these measures will be marginally effective at best, and that they’ll drop them at some point anyway because “they’re unprofitable”.

  • bedrooms@kbin.social · 3 months ago

    Err… ChatGPT detectors are like 50% accurate… These “reasonable precautions” translate to “we’ll try, but there’s nothing we can really do.”

  • athos77@kbin.social · 3 months ago

    Aka, if we pretend to vaguely do something with no consequences for not following through, we can argue that we’re responsive and self-regulating, and hopefully avoid real regulation with teeth.

  • henfredemars@infosec.pub · 3 months ago

    I guess having ideas about how to address this problem is better than nothing. But none of these organizations has demonstrated the capability to actually prevent abuse of AI or the proliferation of disinformation.

      • sbv@sh.itjust.works · 3 months ago

        In some senses it’s worse: they’re making a half-assed effort to sElF rEgUlAtE so governments don’t pass laws to limit what they can do.

        This is the menthol cigarette of AI regulation.