• 0 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 21st, 2023

  • I strongly doubt that hallucination is a fundamental limitation on final output. It may be an inevitable part of the process, but it’s almost certainly a surmountable problem.

    Just off the top of my head, I can imagine using two separate LLMs to produce a final output: the first generates an initial answer, and the second verifies whether that answer is accurate. The chance of two totally independent LLMs producing the same hallucination is probably very low, and you can add as many additional independent LLMs for re-verification as you like. The chance of a hallucination surviving multiple independent verifications gets close to zero.

    While this would greatly multiply the resources required, it’s a simple example showing that hallucinations are not inevitable in final output; a rough sketch of the idea follows below.
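
    To make the idea concrete, here is a minimal sketch in Python of that generate-then-verify chain. The callables, prompt wording, and retry budget are all assumptions for illustration, not any particular vendor's API.

    ```python
    from typing import Callable

    def generate_with_verification(
        question: str,
        generate: Callable[[str], str],         # wraps the drafting LLM
        verifiers: list[Callable[[str], str]],  # wraps each independent checker LLM
        max_retries: int = 3,
    ) -> str:
        """Draft an answer, then have independent models cross-check it.

        Each callable stands in for one LLM; the prompt format and YES/NO
        protocol here are assumptions made for this sketch.
        """
        for _ in range(max_retries):
            answer = generate(question)
            flagged = False
            for verify in verifiers:
                verdict = verify(
                    f"Question: {question}\nAnswer: {answer}\n"
                    "Does this answer contain a factual error or hallucination? "
                    "Reply YES or NO."
                )
                if verdict.strip().upper().startswith("YES"):
                    # Any single verifier flagging the draft triggers a regeneration.
                    flagged = True
                    break
            if not flagged:
                return answer
        raise RuntimeError("No answer passed verification within the retry budget")
    ```

    The key design point is that each verifier only ever sees the finished draft, not the generator's reasoning, which preserves the independence the argument above relies on.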