• TehPers@beehaw.org
    6 months ago

    Be careful relying on LLMs for “searching”. I’m speaking from experience here - getting genuinely accurate results from the current generation of LLMs, even with RAG, is difficult. You might get accurate results most of the time (even 80% or more), but the inaccurate ones can be hard to spot, because models present hallucinated output with the same confidence as correct output.

    Also, if your LLM isn’t doing retrieval-augmented generation (RAG), then it isn’t actually searching anything and won’t find results more recent than the data it was trained on.
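
    To illustrate the point about RAG: the “retrieval” half means documents are fetched at query time and stuffed into the prompt, which is why a RAG system can surface information newer than the model’s training cutoff. A toy sketch of that retrieval step (the corpus, the overlap-based scoring, and the prompt template are all illustrative placeholders, not any particular library’s API):

    ```python
    # Toy sketch of the retrieval step in RAG: documents are fetched
    # at query time and injected into the prompt as context, so the
    # generated answer can reflect data newer than the training set.

    def tokenize(text: str) -> set[str]:
        return set(text.lower().split())

    def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
        """Rank documents by naive token overlap with the query.
        Real systems use embeddings + a vector index instead."""
        q = tokenize(query)
        ranked = sorted(corpus, key=lambda d: len(q & tokenize(d)), reverse=True)
        return ranked[:k]

    def build_prompt(query: str, corpus: list[str]) -> str:
        """Stuff the top-ranked documents into the prompt as context."""
        context = "\n".join(retrieve(query, corpus))
        return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

    # Hypothetical corpus, including a fact the model couldn't have been
    # trained on (e.g. a release that happened after the training cutoff).
    corpus = [
        "The library released version 2.0 in March.",
        "Cats are popular pets.",
        "Version 2.0 adds retrieval support to the library.",
    ]
    prompt = build_prompt("What changed in version 2.0 of the library?", corpus)
    print(prompt)
    ```

    Without the retrieval step, the model only ever sees the bare question, and its answer is bounded by whatever was in its training data - which is the difference between “search” and plain generation.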