If The Machine wants to know when I mention laundry detergent in exchange for giving me a lot of functionality, then I’m fine with that deal
If you have a couple of Alexa devices you can use the “announce” feature. In the bedroom she’ll say “Alexa, announce I’m going to bed”, and then the Alexa in your room will say “Announcement: I’m going to bed”.
Good, fuck that guy, he’s a piece of shit
Stop letting the best be the enemy of the good. This is a good measure we can do immediately
That’s a fun story, but it isn’t applicable to the topic here. That could very easily be verified as true or false by a secondary system; in fact, you can just ask Wolfram Alpha. Ask it what the odds are that any two people share the same birthday. I just asked it that exact question and it replied 1/365
EDIT
In fact, I just asked that exact same question to ChatGPT-4 and it also replied 1/365
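If you’d rather reproduce that figure locally than lean on an external service, here’s a minimal Python sketch (my own illustration, ignoring leap years). The second function is only there because “any two people” can also be read as the group/birthday-paradox version, which gives a very different number:

```python
from fractions import Fraction

# Probability that two *specific* people share a birthday (ignoring
# leap years): the second person must land on the first person's day,
# so 1 in 365 -- the figure both Wolfram Alpha and ChatGPT returned.
p_two_specific = Fraction(1, 365)
print(float(p_two_specific))  # ~0.00274

# "Any two people in a group of n" is the birthday-paradox reading;
# it grows quickly with n and passes 50% at n = 23.
def p_any_shared(n: int) -> float:
    p_all_distinct = 1.0
    for i in range(n):
        p_all_distinct *= (365 - i) / 365
    return 1 - p_all_distinct

print(round(p_any_shared(23), 3))  # ~0.507
```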
There are already multiple LLMs in existence that are essentially completely different from one another. In fact, this is one of the major problems with LLMs: even a small change to a model turns out to radically alter the output it returns for huge numbers of seemingly unrelated topics.
For your other point, I never said bouncing their answers back and forth for verification was trivial, but it’s definitely doable.
Give an example of a statement that you think couldn’t be verified
No, I’ve used LLMs to do exactly this, and it works. You prompt one with a statement and ask “Is this true, yes or no?” It will reply with a yes or no, and it’s almost always correct. Do this verification through multiple different LLMs and it would eliminate close to 100% of hallucinations.
EDIT
I just tested it multiple times in ChatGPT-4, and it got every true/false answer correct.
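A rough sketch of what that yes/no check could look like in code. The model functions here are hypothetical stand-ins for whatever clients you actually call, and the prompt wording is just the one described above:

```python
from typing import Callable

# A model client here is just "prompt in, text out". These are
# hypothetical stand-ins -- swap in real API calls (OpenAI, Anthropic,
# a local model, etc.) for whichever LLMs you want to cross-check.
ModelFn = Callable[[str], str]

def is_claim_true(claim: str, model: ModelFn) -> bool:
    """Ask one model for a bare yes/no verdict on a claim."""
    prompt = f'Is the following statement true? Answer only "yes" or "no".\n\n{claim}'
    return model(prompt).strip().lower().startswith("yes")

def cross_check(claim: str, models: list[ModelFn]) -> bool:
    """Only accept the claim if every independent model answers yes."""
    return all(is_claim_true(claim, m) for m in models)
```

You could just as easily require a majority vote instead of unanimity.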
I very much doubt that hallucination is an unavoidable limitation of final output. It may be an inevitable part of the process, but it’s almost certainly a surmountable problem.
Just off the top of my head, I can imagine using two separate LLMs for a final output: the first one generates an initial output, and the second one verifies whether what it says is accurate. The chance of two totally independent LLMs having the same hallucination is probably very low. And you can add as many additional separate LLMs for re-verification as you like. The chance of a hallucination making it through multiple LLM verifications probably gets close to zero.
While this would greatly multiply the resources required, it’s just a simple example showing that hallucinations are not inevitable in final output
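To put a rough number on that intuition: if each verifier independently missed a given hallucination with probability p, the chance of it surviving k passes would be p^k. A back-of-the-envelope sketch, under an independence assumption that real LLMs (which share a lot of training data) only approximate:

```python
# Assume each verifier independently misses a given hallucination with
# probability p_miss. The chance it survives all k verification passes
# is p_miss ** k -- optimistic, since real LLMs aren't fully independent.
def survival_probability(p_miss: float, k: int) -> float:
    return p_miss ** k

for k in (1, 2, 3, 4):
    print(k, survival_probability(0.1, k))  # 0.1, 0.01, 0.001, 0.0001
```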
Use it more. That really is the answer.
That wouldn’t work; there are zillions of existing ICE cars already in use there. They aren’t banning ICE cars that are already there, just banning the importation of new ones
And it needs a no-PWM screen!!! Pulse-width modulation causes eye strain for a lot of us. Those are literally the two main things I need in a phone: a small screen with no PWM.
Get a Quest; you can stream your videos to a huge virtual screen for literally 10% of the price of an Apple Vision
Apple Vision will be a very good product… in a few years, after it’s much cheaper and more capable. But as of today, you can get an Oculus Quest, which does a large percentage of the same stuff for literally 10% of the price
No, any type of driving mode is the worst! The phone should always operate the same! The most distracting thing when driving is trying to figure out a whole new, unfamiliar interface and having to fight with it to get it to do what you want. Your phone should always work the same way