  • It’s also the anti-commodity stuff IP has been allowing. If Hershey makes crap chocolate, there is little stopping you from buying, say, Lindt. But if Microsoft makes a bad OS, there’s a lot stopping you from switching to Linux or whatever.

    What’s worse is stuff like DRM, and computers getting into equipment for which you could otherwise use any of a bevy of products. Think ink cartridges.

    Then there are the secret formulas, like for transmission fluid now, where say Honda states in the manual that you have to use Honda fluid for the transmission to keep working. Idk if that’s actually true, but I’m loath to run the 8k USD experiment on my transmission.

    You’d think the government could mandate standards, but we don’t have anything like that.


  • I think it’s very clear that this “stochastic parrot” idea is less and less accepted by researchers and philosophers, though maybe that’s just in the podcasts I listen to…

    “It’s not capable of knowledge in the sense that humans are. All it does is probabilistically predict which sequence of words might best respond to a prompt.”

    I think we need to be careful assuming we understand what human knowledge is, and careful about the connotations of the word “sense” there. If you mean GPT-4 doesn’t have knowledge the way humans do, the way a car doesn’t have motion the way a human does, then I think we agree. But if you mean GPT-4 cannot reason and access and present information - that’s just false on the face of simply using the tool IMO.

    It’s also untrue that it’s predicting words: it’s using tokens, which are more like concepts than words, so I’d argue that’s already closer to humans. To the extent it is just predicting stuff, that really calls into question the value of most of the school essays it now writes so well…
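
    If you’re curious what those tokens actually look like, here’s a minimal sketch using OpenAI’s tiktoken library (pip install tiktoken); the sample text is just an illustration:

    ```python
    # Minimal sketch: inspect how GPT-4 splits text into tokens.
    # Tokens are the model's unit of prediction; they are byte chunks that
    # can be whole words, sub-word pieces, or punctuation.
    import tiktoken

    enc = tiktoken.encoding_for_model("gpt-4")
    token_ids = enc.encode("Stochastic parrots, probably.")

    for tid in token_ids:
        # decode_single_token_bytes shows the raw bytes behind each token id
        print(tid, enc.decode_single_token_bytes(tid))
    ```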


  • Well, LLMs can and do provide feedback about their confidence in colloquial terms. One thing we could do is give them some idea of how good the training data is in a given situation - LLMs already seem to know they aren’t up to date and only know things up to a certain date. I don’t see why this couldn’t be expanded so they’d say something much like many humans would, i.e. “I think bla bla, but I only know very little about this topic,” or “I haven’t actually heard about this topic; my hunch would be bla bla.”

    Presumably, as was said, other models with different data might have a stronger sense of certainty if their data covers the topic better, and a multi-model cycle would be useful there - roughly like the sketch below.
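
    A minimal sketch of the idea in pure Python - the coverage scores, answers, and phrasing thresholds are all made up for illustration, and a real system would need the model to expose some genuine signal of training-data coverage:

    ```python
    # Minimal sketch: map a (hypothetical) training-data coverage score to
    # colloquial hedging, then prefer the answer from whichever model claims
    # the best coverage of the topic. All values here are invented.

    def hedge(answer: str, coverage: float) -> str:
        """Prefix an answer with a human-style confidence phrase."""
        if coverage >= 0.8:
            return f"I'm fairly confident: {answer}"
        if coverage >= 0.5:
            return f"I think {answer}, but I only know a little about this topic."
        return f"I haven't really seen this topic; my hunch would be: {answer}"

    def pick_best(candidates: list[tuple[str, float]]) -> str:
        """Multi-model cycle: take the answer backed by the best coverage."""
        answer, coverage = max(candidates, key=lambda pair: pair[1])
        return hedge(answer, coverage)

    # Three hypothetical models answering the same prompt.
    candidates = [
        ("the recommended fluid spec changed in 2019", 0.35),  # generalist
        ("the recommended fluid spec changed in 2019", 0.85),  # domain-heavy
        ("the spec has never changed", 0.10),                  # stale model
    ]
    print(pick_best(candidates))
    ```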