• 0 Posts
  • 14 Comments
Joined 11 months ago
Cake day: June 18th, 2023


  • Neither is an LLM. What you’re describing is a primitive Markov chain.

    My description might’ve sounded like a Markov chain, but the actual framework uses matrices, because you need to store and compute a huge amount of information at once, which is what matrices are good for. They’re used in animation too, if you didn’t know.
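
    To make the Markov-chain comparison concrete, here’s a toy sketch (the vocabulary and probabilities are made up) of next-word sampling from a transition matrix. This is the primitive version being described, not how an LLM actually works:

    ```python
    import numpy as np

    # Toy vocabulary and a row-stochastic transition matrix:
    # entry T[i][j] = P(next word is vocab[j] | current word is vocab[i]).
    vocab = ["the", "cat", "sat"]
    T = np.array([
        [0.0, 0.7, 0.3],   # after "the": usually "cat"
        [0.1, 0.0, 0.9],   # after "cat": usually "sat"
        [0.8, 0.1, 0.1],   # after "sat": usually "the"
    ])

    rng = np.random.default_rng(0)

    def sample_next(word):
        """Pick the next word using only the previous word's row."""
        i = vocab.index(word)
        return vocab[rng.choice(len(vocab), p=T[i])]

    word = "the"
    chain = [word]
    for _ in range(5):
        word = sample_next(word)
        chain.append(word)
    print(" ".join(chain))
    ```

    The whole "memory" of this model is one matrix row; an LLM conditions on far more context than the single previous word, which is the scale difference at issue.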

    What it actually uses is irrelevant; how it uses those things is the same as a regression model, and the difference is scale. A regression model looks at how strongly related variables are to an outcome and computes weights that give you the best prediction. This was the machine-learning boom a couple of years ago, when TensorFlow became really popular.
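
    The regression picture above can be sketched in a few lines. The data here is invented purely for illustration: fit weights that relate two variables to an outcome, which is the same "learn weights from data" idea that LLM training scales up:

    ```python
    import numpy as np

    # Made-up data: the outcome depends on two variables plus a little noise.
    rng = np.random.default_rng(42)
    X = rng.normal(size=(100, 2))
    true_w = np.array([2.0, -1.0])
    y = X @ true_w + 0.1 * rng.normal(size=100)

    # Ordinary least squares: compute the weights that best predict y.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(w)  # close to [2.0, -1.0]
    ```

    Swap the two columns for billions of parameters and the closed-form solve for gradient descent, and the underlying idea (weights fit to minimise prediction error) is unchanged.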

    LLMs are an evolution of the same idea. I’m not saying it’s not impressive; it’s very cool what they were able to do. What I take issue with is the branding, the marketing and the plagiarism. I happen to sit at the intersection of working in the same field, being an avid fan of classic sci-fi, and being a writer.

    It’s easy to look at what people have created throughout history and think “this looks like that”, and on a point-by-point basis you’d be correct, but the creation of that thing is shaped by the lens of the person creating it. Someone might point to a George Carlin joke we’ve heard recently and then find the same idea in a newspaper from 200 years ago. Did George Carlin steal the idea? No. Was he aware of that information? I don’t know. But Carlin regularly called upon his own experiences, so it’s likely he was referencing an event from his past that resembles the one from 200 years ago. He might’ve subconsciously absorbed the information.

    The point is that the way these models have been trained is unethical. They used material they had no license to use, and they’ve admitted the models couldn’t work as well as they do without stealing other people’s work. I don’t think they’re taking the position that it’s intelligent, because from the beginning that was a marketing ploy. They’re taking the position that they should be allowed to use the data they stole because there was no other way.

  • So as a data analyst, a lot of my work is done through a computer, but I could apply the same skills if someone handed me a piece of paper with data printed on it and told me to come up with solutions to the problems in it. I don’t need the computer to do what I do; it makes it easier to manipulate data, but the degree of problem solving required has to be done by a human, and that’s why it’s my job. If a machine could do it, then machines would be doing it, but they aren’t, because contrary to what people believe about data analysis, you have to be somewhat creative to do it well.

    Crafting a prompt is an exercise in trial and error. It’s work, but it’s not skilled work; it doesn’t take talent or practice to do. Even with the prompt, you are still at the mercy of the machine.

    Even in the case you’ve presented, I have to ask: at what point does a human’s editing of a generative model’s output make it your own work and not the machine’s? How much do you have to change? Can you give me a percentage?

    Machines were intended to automate the tedious tasks we all have to suffer, freeing up our brains for more engaging things, which might include creative pursuits. Automation exists to make your life easier, not to rob you of life’s pursuits or your livelihood. It never should’ve been used to produce creative work, and I find the attempts to equate this abomination’s outputs to what artists have been doing for years utterly deplorable.



  • Exactly! You can glean so much from a single work: not just about the work itself, but about who created it, what ideas they were trying to express, and what that tells us about the world they lived in and how they saw it.

    This doesn’t even touch on the fact that I’m learning to draw not by looking at other drawings but by looking at exactly what I’m trying to draw. I know that at a base level a drawing is a series of shapes made by hand, whether through a digital medium or traditional pen/pencil and paper. But the skill isn’t being able to replicate other drawings; it’s being able to convert something I can see into a drawing. If I’m drawing someone sitting in a wheelchair, I’ll capture the pose of them sitting in the wheelchair, but I can add details I want to emphasise or remove details I don’t. There’s so much that goes into creative work, and I’m tired of arguing with people who have no idea what it takes to produce it.


  • You say that, yet I initially responded to someone who was comparing an LLM to what a comedian does.

    There is no unique method because there’s hardly anything unique you can do. Two people using Stable Diffusion to produce an image are putting in the same amount of work. One might put more time into crafting the right prompt, but that’s not you doing the work.

    If 90% of the work is handled by the model and you just layer on whatever extra thing you wanted, that doesn’t mean you created the thing. It also doesn’t mean you have much control over the output; you’re effectively negotiating with the machine to produce what you want.


  • Yeah, but the difference is we still choose our words. We can still alter sentences on the fly. I can think of a sentence, understand that verbs go after the subject, and still have the cognition to alter the sentence to get the effect I want. The thing lacking in LLMs is intent, and I’ve yet to see anyone tell me why a generative model decides to draw more than six fingers. As humans we know hands generally have five fingers, and there’s a group of people who don’t, so we could draw a person with a different number of fingers if we wanted to. A generative art model can’t help itself from drawing extra fingers, because all it understands is that “finger + finger = hand”; it has no concept of when to stop.


  • A comedian isn’t forming a sentence based on which word is most probable to appear after the previous one. This is such a bullshit argument; it reduces human competency to “monkey see thing, draw thing” and completely overlooks the craft and intent behind creative works. Do you know why ChatGPT uses certain words over others? Probability. It decided, as a result of its training, that one word would follow the previous in certain contexts. It absolutely doesn’t take into account things like “maybe this word would be better here because its sound and syllables maintain the flow of the sentence”.
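
    That probability-only selection can be sketched like this. The candidate words and scores are entirely hypothetical; real models score tens of thousands of tokens, but the selection rule is the same:

    ```python
    import numpy as np

    def softmax(logits):
        """Turn raw scores into a probability distribution."""
        z = np.exp(logits - logits.max())
        return z / z.sum()

    # Hypothetical scores a model might assign to candidate next words
    # after some context like "The comedian told a".
    vocab = ["joke", "story", "lie", "chair"]
    logits = np.array([3.1, 1.2, 0.4, -2.0])

    probs = softmax(logits)
    for word, p in zip(vocab, probs):
        print(f"{word}: {p:.3f}")

    # Greedy decoding just takes the most probable word; nothing about
    # sound, syllables, or rhythm enters the choice.
    print(vocab[int(np.argmax(probs))])
    ```

    Sampling strategies add randomness on top of these probabilities, but the model is still choosing by score, not by anything resembling deliberate craft.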

    Baffling takes from people who don’t know what they’re talking about.