I’d say it’s not just misleading but incorrect if it says “integer” but it’s actually floats.
I actually looked into this. Part of the explanation is that in the 80s, Sweden entered a public/private partnership to subsidize the purchase of home computers, which would otherwise have been prohibitively expensive. This helped create a relatively wide local consumer base for software entertainment, as well as a jump start on computer literacy and software development.
I think to some extent it’s a matter of scale, though. If I advertise something as a calculator capable of doing all math, and it can only do one problem, it is so drastically far away from its intended purpose that the meaning kinda breaks down. I don’t think it would be wrong to say “it malfunctions in 99.999999% of use cases” but it would be easier to say that it just doesn’t work.
Continuing (and torturing) that analogy: if we did the disgusting work of precomputing all two-number math problems for integers from -1,000,000 to 1,000,000, I think you could say you had a (really shitty and slow) calculator, one that “malfunctions” for numbers outside that range if you don’t specify the limitation ahead of time. Not crazy different from software that has issues with max_int or small buffers.
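To make the analogy concrete, here’s a toy version of that precomputed calculator, shrunk to ±100 so the table builds instantly. The range, operators, and names are illustrative choices of mine, not anything from the thread:

```python
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

# Precompute every two-number problem in range: the entire
# "calculator" is just a dictionary lookup.
TABLE = {
    (a, op, b): fn(a, b)
    for a in range(-100, 101)
    for b in range(-100, 101)
    for op, fn in OPS.items()
}

def calc(a, op, b):
    """Look the answer up; 'malfunction' outside the precomputed range."""
    try:
        return TABLE[(a, op, b)]
    except KeyError:
        raise ValueError("out of range: this calculator only goes to ±100")

print(calc(7, "*", -8))  # -56
```

Inside the advertised range it’s a (slow, memory-hungry) calculator; one step outside and it fails, which is exactly the max_int/small-buffer flavor of limitation.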
If there had only been one case of a hallucination with LLMs, I think we could pretty safely call that a malfunction (and we wouldn’t be having this conversation). If it happens 0.000001% of the time, I think we could still call it a malfunction and say that it performs better than a lot of software. If it happens 99.999% of the time, it’d be better to say that it just doesn’t work. I don’t think there is, or even needs to be, some unified understanding of where the line is between them.
Really my point is there are enough things to criticize about LLMs and people’s use of them, this seems like a really silly one to try and push.
We’re talking about the meaning of “malfunction” here; we don’t need to overthink it and construct a rigorous proof or anything. The creator of the thing can decide what the thing they’re creating is supposed to do. You can say:

“Hey, it did X, was that supposed to happen?”

“No, it was not supposed to do that; that’s a malfunction.”

We don’t need to go to:

“Actually, you never sufficiently defined its function to cover all cases in an objective manner, so ACTUALLY it’s not a malfunction!”

“Whatever, it still wasn’t supposed to do that.”
The purpose of an LLM, at a fundamental level, is to approximate text it was trained on.
I’d argue that’s what an LLM is, not its purpose. Continuing the car analogy, that’s like saying a car’s purpose is to burn gasoline to spin its wheels. That’s what a car does, the purpose of my car is to get me from place to place. The purpose of my friend’s car is to look cool and go fast. The purpose of my uncle’s car is to carry lumber.
I think we more or less agree on the fundamentals; the difference is just whether we’re referring to a malfunction in the system they’re trying to create, in which an LLM is a key tool/component, or a malfunction in the LLM itself. At the end of the day, I think we can all agree that it did a thing they didn’t want it to do, and that an LLM by itself may not be the correct tool for the job.
Where I don’t think your argument holds up is that it could be applied to anything an LLM currently does. If I have an insufficiently trained model which produces word salad in response to every prompt, one could say “that’s not a malfunction, it’s still applying weights.”
The function is having a system that produces useful results. An LLM is just the means of achieving that result, and you could argue it’s the wrong tool for the job, and that’s fine. If I put gasoline in my diesel car and the engine dies, I can still say the car is malfunctioning. It’s my fault, and the engine was never supposed to have gas in it, but the car is now “failing to function in a normal or satisfactory manner,” which is the definition of malfunction.
It implies that, under the hood, the LLM is “malfunctioning”. It is not - it’s doing what it is supposed to do, to chain tokens through weighted probabilities.
I don’t really agree with that argument. By that logic, there’s really no such thing as a software bug, since the software is always doing what it’s supposed to be doing: giving predefined instructions to a processor that performs some action. It’s “supposed to” provide a useful response to prompts; anything other than that is not what it should be and could fairly be called a malfunction.
I’ve definitely gone too far with that, but I kind of enjoy it. The number of options, particularly being able to map a button to the mouse moving somewhere, clicking, and moving back, has made some games feel like they have native controller support when they don’t.
To be clear, your stance is it’s such a small step in the right direction, you’d prefer no step at all? Keep it cis-only or invest time/money in extra character models?
Haven’t digital price tags been used for decades? I’m sure these will be more high-tech, but I remember ones like this at least 20 years ago.
I think that’s been a fair description of the AAA space for a long time, which is fine. If you want innovation, go indie; if you want big budget, go AAA.
Yeah, I just did a quick test in Python making a TCP connection to “0.0.0.0”, and it made a loopback connection instead of returning an error as I would have expected.
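For anyone who wants to reproduce it, a minimal version of that test might look like this (this shows Linux behavior, where connecting to 0.0.0.0 is treated as “this host”; other platforms, notably Windows, can refuse the connection instead):

```python
import socket

# Listener bound to loopback on an OS-assigned ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

# Connecting to 0.0.0.0: instead of erroring, the connect silently
# reaches the loopback listener on Linux.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("0.0.0.0", port))
conn, addr = server.accept()
print(addr[0])  # the peer address is loopback, not an error

client.close()
conn.close()
server.close()
```

So the “address” 0.0.0.0 is really only meaningful as a bind address; as a destination it quietly becomes loopback on Linux rather than failing.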
Right, that’s what I’m saying
I believe it was the Tea Party. Man, haven’t thought about that in a long time.
Minor aside: I really dislike when analyses like this use life expectancy instead of adult life expectancy. Child mortality skews it so drastically that it’s useless.
More detail on the topic https://fivethirtyeight.com/features/money-and-elections-a-complicated-love-story/
How strong is the association between campaign spending and political success? For House seats, more than 90 percent of candidates who spend the most win.
…
Money is certainly strongly associated with political success. But, “I think where you have to change your thinking is that money causes winning,” said Richard Lau, professor of political science at Rutgers. “I think it’s more that winning attracts money.”
…
Instead, he and Lau agreed, the strong raw association between raising the most cash and winning probably has more to do with big donors who can tell (based on polls or knowledge of the district or just gut-feeling woo-woo magic) that one candidate is more likely to win — and then they give that person all their money.
…
“Money matters a great deal in elections,” Bonica said. It’s just that, he believes, when scientists go looking for its impacts, they tend to look in the wrong places. If you focus on general elections, he said, your view is going to be obscured by the fact that 80 to 90 percent of congressional races have outcomes that are effectively predetermined by the district’s partisan makeup.
…
But in 2017, Bonica published a study that found, unlike in the general election, early fundraising strongly predicted who would win primary races. That matches up with other research suggesting that advertising can have a serious effect on how people vote if the candidate buying the ads is not already well-known and if the election at hand is less predetermined along partisan lines.
…
Another example of where money might matter: Determining who is capable of running for elected office to begin with. Ongoing research from Alexander Fouirnaies, professor of public policy at the University of Chicago, suggests that, as it becomes normal for campaigns to spend higher and higher amounts, fewer people run and more of those who do are independently wealthy. In other words, the arms race of unnecessary campaign spending could help to enshrine power among the well-known and privileged.
Looking completely realistic and being able to discern between real and fake are competing goals. If you can discern the difference, then it does not look completely realistic.
I think what they’re alluding to is generative adversarial networks https://en.m.wikipedia.org/wiki/Generative_adversarial_network where creating a better discriminator that can detect a good image from bad is how you get a better image.
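A toy sketch of that adversarial loop, on a 1-D task: the “generator” here is a single learnable scale on Gaussian noise and the “discriminator” is logistic regression on x², all of which are my own illustrative assumptions just to show the structure, not anything from a real GAN implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data is N(0, 2); the generator scales noise by one weight
# g_w, so a perfect generator would learn g_w ≈ 2. The discriminator
# is logistic regression on x**2, which is enough to tell two
# zero-mean Gaussians apart by their spread.
g_w = 0.1
d_w, d_b = 0.0, 0.0
lr = 0.01

def disc(x):
    # P(x is real) according to the discriminator
    return 1.0 / (1.0 + np.exp(-(d_w * x**2 + d_b)))

for _ in range(2000):
    z = rng.standard_normal(64)
    fake = g_w * z
    real = 2.0 * rng.standard_normal(64)

    # Discriminator step: push D(real) -> 1 and D(fake) -> 0.
    for x, label in ((real, 1.0), (fake, 0.0)):
        grad = disc(x) - label  # dLoss/dlogit for cross-entropy
        d_w -= lr * np.mean(grad * x**2)
        d_b -= lr * np.mean(grad)

    # Generator step: push D(fake) -> 1 (non-saturating GAN loss).
    # dlogit/dg_w = d_w * 2 * g_w * z**2 by the chain rule.
    grad = disc(fake) - 1.0
    g_w -= lr * np.mean(grad * d_w * 2.0 * g_w * z**2)

print(f"learned scale g_w ≈ {g_w:.2f}")  # drifts toward the real spread of 2
```

The point being made in the thread shows up directly in the loop: the generator only improves because the discriminator keeps getting better at telling real from fake, so a stronger detector and a more realistic generator are two sides of the same training process.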
Not necessarily. If your phone uses USB-C and you get a USB-C flash drive, you can make a bootable USB with your phone using the flasher app. The reviews are pretty mixed on whether or not it works, but it could be worth a shot.
The most ridiculous part about it to me is that you lose any semblance of accuracy with it. Not only is it not necessary for hunting or home defense, I’d argue it is not useful.
Its use is that it is probably pretty fun to fire at a shooting range, and it’s very useful if you want to fire into a crowd and indiscriminately kill as many people as you can.
That’s kind of a weird take, since the private-server model was the only model until 10 years ago or so. Companies definitely know it. It’s just not financially efficient compared to benefiting from economies of scale with hosting. Plus, you don’t lose a ton of money or piss off players if you over- or underestimate how popular the game will be.
Had they gone with private servers here, they would have lost even more money than they already have. The problem here is they spent too much money on a game no one wanted to play, chasing a fad that ended before it launched.