Apparently, stealing other people’s work to create a product for money is now “fair use,” according to OpenAI, because they are “innovating” (stealing). Yeah. Move fast and break things, huh?

“Because copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents—it would be impossible to train today’s leading AI models without using copyrighted materials,” wrote OpenAI in the House of Lords submission.

OpenAI claimed that the authors in that lawsuit “misconceive[d] the scope of copyright, failing to take into account the limitations and exceptions (including fair use) that properly leave room for innovations like the large language models now at the forefront of artificial intelligence.”

  • qyron@sopuli.xyz
    10 months ago

    If it is impossible, either shut down operations or find a way to pay for it.

    • webghost0101@sopuli.xyz
      10 months ago

My concern is that they and other tech companies absolutely can and would pay if they had no choice, even paying fines for illegal practices if need be.

What absolutely won’t survive a strong law keeping copyrighted content out of AI is the open source community, which absolutely cannot pay for such a thing and would be left seriously lagging behind if excluded, strengthening the monopoly on AI held by for-profit tech. So basically this issue can have huge ramifications no matter what we end up doing.

      • frog 🐸@beehaw.org
        10 months ago

        My understanding of the open source community is that taking copyrighted content from people who haven’t willingly signed onto the project would kind of undermine the principles of the movement. I would never feel comfortable using open source software if I had knowledge that part or all of it came from people who hadn’t actively chosen to contribute to it.

I have seen a couple of things recently about AI models that were trained exclusively on public domain and Creative Commons content, which apparently are producing viable output, though. The open source community could definitely use a model like that, and develop it further with more content that was ethically obtained. In the long run, there may be artists who willingly contribute to it, especially those who use open source software themselves (e.g. GIMP, Blender, etc). Paying it forward, kind of thing.

        The problem right now is that artists have no reason to be generous with an open source alternative to AIs, when their rights have already been stomped on and certain people in the open source community are basically saying “if we can’t steal from artists too, then we can’t compete with the corporations.” So there’s literally a trust issue between the creative and tech industries that would need to be resolved before any artists would consider offering art to an open source AI.

        • webghost0101@sopuli.xyz
          10 months ago

It’s quite a mess, but I definitely agree that open source needs a good model trained on consented works.

I do fear, though, that the quality gap between copyright-trained and purist models will be huge in the first decades. And no matter the law, the tech is out there, and corporations and criminals will be using it in secret nonetheless.

If only things were as simple as siding with the chad digital artists. Digital art was part of my higher education, and if I hadn’t gotten a tech job I might have been one of them, so I feel torn between the two industries.

This may sound doomer, but since the technology exists, we are in a race to obtain beyond-human superintelligence, and we do not know what will happen after that.

OpenAI has stated multiple times that they don’t know if copyright will still mean anything in a future with AI.

We are also facing some huge global issues like global warming, where a superintelligence could be the answer to sustaining the planet, of course also risking evil AI in the process… I repeat: such a mess.

I don’t fully trust Sam Altman, but I do believe what they say may be true. At some point it’s going to be here, and it will be too smart to ignore.

It’s optimistically possible that in 20 years we will all be leisurely artists, laughing at the idea of needing to work to earn survival.

It’s of course just as likely that some statehead old bastard presses the death button next week and that’s the end of all of it, or that the climate has progressed beyond what our smartest future AI could possibly solve.

          • frog 🐸@beehaw.org
            10 months ago

            I definitely do not have the optimism that in 20 years time we’ll all be leisurely artists. That would require that the tech bros who create the AIs that displace humans are then sufficiently taxed to pay UBI for all the humans that no longer have jobs - and I don’t see that happening as long as they’re able to convince governments not to tax, regulate, or control them, because doing so will make it impossible for them to save the planet from climate change, even as their servers burn through more electricity (and thus resources) than entire countries. Tech bros aren’t going to save us, and the only reason they claim they will is so they never face any consequences of their behaviour. I don’t trust Sam Altman, or any of his ilk, any further than I can throw them.