Frankly, nobody that’s involved in this fight is looking good to me on either “side.” It’s a fight that shouldn’t be happening at all. This is a game engine. Why is it a battleground for this?
Things change. There was a period before this information was easily available; this repository only goes back to 2013. Now there’s a period after this information, too. Things start and eventually they end.
Here’s hoping that some neat new things start up in its place.
Some people are so addicted to anger that they’ll shoot themselves in the foot just so they’ll have something to complain about.
“The gimp” is a character from Pulp Fiction. You’re imagining things and refusing to use a powerful tool in response to that imagined slight.
Looking forward to the “Waymo robotaxis become silent killers stalking the night” headlines once the fix is implemented.
I run tabletop roleplaying adventures and LLMs have proven to be great “brainstorming buddies” when planning them out. I bounce ideas back and forth, flesh them out collaboratively, and have the LLM speak “in character” to give me ideas for what the NPCs would do.
They’re not quite up to running the adventure themselves yet, but it’s an awesome support tool.
It’s impossible to run an AI company “ethically” because “ethics” are such a wibbly-wobbly and subjective thing, and because there are people who simply wish to use it as a weapon on one side of a debate or the other. I’ve seen goalposts shift around quite a lot in arguments over “ethical” AI.
But when you die and an AI company contacts all your grieving friends and family to offer them access to an AI based on you (for a low, low fee!)
You can stop right there, you’re just imagining a scenario that suits your prejudices. Of all the applications for AI that I can imagine that would be better served by a model that is entirely under my control this would be the top of the list.
With that out of the way the rest of your rhetorical questions are moot.
Even with that, being absolutist about this sort of thing is wrong. People undergoing surgery have spent time on heart/lung machines that breathe for them. People sometimes fast for good reasons, or get IV fluids or nutrients provided to them. You don’t see protestors outside of hospitals decrying how humans aren’t meant to be kept alive with such things, though, at least not in most cases (as always there are exceptions, the Terri Schiavo case for example).
If I want to create an AI substitute for myself it is not anyone’s right to tell me I can’t because they don’t think I was meant to do that.
I don’t believe humans are “meant” to do anything. We are a result of evolution, not intentional design. So I believe humans should do whatever they personally want to do in a situation like this.
If you have a loved one who does this and you don’t feel comfortable interacting with their AI version, then don’t interact with their AI version. That’s on you. But don’t belittle them for having preferences different from your own. Different people want different things and deal with death in different ways.
If you don’t want to do it then don’t do it. Can we stop trying to tell everyone else they have to have the same values as you?
If their goal is to prevent AI trainers from scraping their art then an open federated platform is the opposite of what they want.
It also has an expensive back end and no plans for any kind of monetization, so it’s dead in the water from that side too. The moment they’re successful they’re broke.
Honestly, I started laughing my head off when I saw that ragged flap still moving and the Starship still maintaining attitude control with it. That’s the sort of battle damage I expect to see in a science fiction show; I wasn’t expecting SpaceX to bring that sort of thing into the real world too.
If they feel less need to add proper alt-text because people’s browsers are doing a better job anyway, I don’t see why that’s a problem. The end result is better alt text.
I’d expect it wouldn’t be too hard to expand the context fed into the AI from just the pixels to include adjacent text as well. Multimodal AIs can accept both kinds of input. Might as well start with the basics, though.
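As a rough illustration of that idea, here’s a minimal sketch of how a browser extension might bundle an image together with the text around it before asking a captioning model for alt text. The message shape loosely follows the “list of content parts” convention several multimodal chat APIs use; the function name and structure here are hypothetical, not any real browser or vendor API.

```python
# Hypothetical sketch: pair an image with its surrounding page text so a
# multimodal captioning model has more context than the pixels alone.

def build_alt_text_request(image_url, preceding_text="", following_text=""):
    """Assemble one user message combining an image with nearby page text."""
    context_lines = []
    if preceding_text:
        context_lines.append(f"Text before the image: {preceding_text}")
    if following_text:
        context_lines.append(f"Text after the image: {following_text}")
    context = "\n".join(context_lines) or "No surrounding text available."

    return {
        "role": "user",
        "content": [
            # The image itself...
            {"type": "image_url", "image_url": {"url": image_url}},
            # ...plus the adjacent text as extra context for the model.
            {
                "type": "text",
                "text": (
                    "Write concise alt text for this image. "
                    "Use the surrounding page text for context:\n" + context
                ),
            },
        ],
    }

request = build_alt_text_request(
    "https://example.com/photo.jpg",
    preceding_text="Our cat Milo discovered the laundry basket today.",
)
```

The point is just that the adjacent text rides along in the same request; the model can then disambiguate what the image is actually *about* (Milo, not laundry baskets in general).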
The Fediverse doesn’t have any defenses against AI impersonators though, aside from irrelevance. If it gets big the same incentives will come into play.
But you’re claiming that there’s already no ladder. Your previous paragraph was about how nobody but the big players can actually start from scratch.
Adding cost only makes the threshold higher. The opposite of the way things should be going.
All this aside from the conceptual flaws of such legislation. You’d be effectively outlawing people from analyzing data that’s publicly available
How? This is a copyright suit.
Yes, and I’m saying that it shouldn’t be. Analyzing data isn’t covered by copyright; only copying data is. Training an AI on data isn’t copying it. Copyright should have no hold here.
Like I said in my last comment, the gathering of the data isn’t in contention. That’s still perfectly legal and anyone can do it. The suit is about the use of that data in a paid product.
That’s the opposite of what copyright is for, though. Copyright is all about who can copy the data. One could try to sue some of these training operations for having made unauthorized copies of stuff, such as the situation with BookCorpus (a collection of ebooks that many LLMs have trained on that is basically pirated). But even in that case, the copyright violation is not the training of the LLM itself; it’s the distribution of BookCorpus. And one detail of piracy that the big copyright holders don’t like to talk about is that, generally speaking, downloading pirated material isn’t the illegal part; uploading it is. So even there, an LLM trainer might be able to use BookCorpus. It’s whoever gave them the copy of BookCorpus that’s in trouble.
Once you have a copy of some data, even if it’s copyrighted, there’s no further restriction on what you can do with that data in the privacy of your own home. You can read it. You can mulch it up and make papier-mâché sculptures out of it. You can search-and-replace the main character’s name with your own, and insert paragraphs with creepy stuff. Copyright is only concerned with you distributing copies of it. LLM training is not doing that.
If you want to expand copyright in such a way that rights-holders can tell you what analysis you can and cannot subject their works to, that’s a completely new thing and it’s going down a really weird and dark path for IP.
They’re the ones training “base” models. There are a lot of smaller base models floating around these days with open weights that individuals can fine-tune, but they can’t start from scratch.
What legislation like this would do is essentially let the biggest players pull the ladders up behind them - they’ve got their big models trained already, but nobody else will be able to afford to follow in their footsteps. The big established players will be locked in at the top by legal fiat.
All this aside from the conceptual flaws of such legislation. You’d be effectively outlawing people from analyzing data that’s publicly available to anyone with eyes. There’s no basic difference between training an LLM off of a website and indexing it for a search engine, for example. Both of them look at public data and build up a model based on an analysis of it. Neither makes a copy of the data itself, so existing copyright laws don’t prohibit it. People arguing for outlawing LLM training are arguing to dramatically expand the concept of copyright in a dangerous new direction it’s never covered before.
If you divide the sides up to “people who care about this stuff” and “people who just want to make games”, then yeah. One side’s doing okay.