Large language models (LLMs) are advanced AI-based dialogue systems that can answer user queries and generate text according to human instructions.
With the emergence of high-performance models such as OpenAI's ChatGPT, these systems have become increasingly popular, and more and more companies have begun to invest in and develop them.
Despite their promise to answer human questions in real time and to produce text for specific purposes, LLMs sometimes generate meaningless, inaccurate, or irrelevant text that diverges from the prompts human users provide. This phenomenon, usually attributed to limitations in the data used to train the model or to errors in its underlying reasoning, is known as LLM “hallucination”.
According to media reports, researchers at the University of Illinois Urbana-Champaign recently introduced KnowHalu, a framework for detecting hallucinations in LLM-generated text.
The accompanying paper, posted on the arXiv preprint server, states that the framework can help improve the reliability of these models and simplify their use across various text generation tasks.
Image credit: arXiv
“As LLMs progress, hallucinations have become a key obstacle to their wider use in the real world,” said Bo Li, an advisor on the project.
“Although numerous studies have addressed LLM hallucination, existing methods often fail to use real-world knowledge effectively, or use it inefficiently.
Inspired by this gap, we developed a novel multi-form knowledge-based hallucination detection framework for LLMs.
In addition, we found a gap in current research on non-fabrication hallucination: responses that are factually correct but irrelevant to the query or not specific enough.”
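To make that distinction concrete, here is a minimal, hypothetical Python sketch of the general idea of knowledge-grounded hallucination checking. It is not the KnowHalu algorithm: the function names (`classify`, `_grounded`, `_addresses_query`), the keyword-overlap grounding test, and the year-based relevance test are illustrative stand-ins, whereas a real system would rely on retrieval plus an LLM or entailment model for these judgments. The sketch only shows the two failure modes the researchers describe: an answer with no knowledge support (fabrication) versus an answer that is factually correct but does not address the query (non-fabrication hallucination).

```python
import re

# Toy illustration of knowledge-grounded hallucination checking.
# NOT the KnowHalu method: all heuristics below are hypothetical stand-ins.

def _keywords(text: str) -> set[str]:
    """Lowercase word set, ignoring short function words."""
    return {w.strip(".,!?").lower() for w in text.split() if len(w) > 3}

def _grounded(answer: str, knowledge: list[str]) -> bool:
    """Toy check: is the answer backed by at least one knowledge snippet?"""
    ans_kw = _keywords(answer)
    return any(len(ans_kw & _keywords(fact)) >= 2 for fact in knowledge)

def _addresses_query(query: str, answer: str) -> bool:
    """Toy relevance check: a 'when' question should be answered with a year."""
    if query.lower().startswith("when"):
        return re.search(r"\b\d{4}\b", answer) is not None
    # Fallback: require some keyword overlap with the query.
    return bool(_keywords(query) & _keywords(answer))

def classify(query: str, answer: str, knowledge: list[str]) -> str:
    if not _grounded(answer, knowledge):
        return "fabrication hallucination"       # no support in the knowledge
    if not _addresses_query(query, answer):
        return "non-fabrication hallucination"   # factually correct but off-target
    return "supported"

if __name__ == "__main__":
    query = "When was the Eiffel Tower completed?"
    knowledge = [
        "The Eiffel Tower was completed in 1889 for the World's Fair.",
        "The Eiffel Tower is located on the Champ de Mars in Paris.",
    ]
    print(classify(query, "It was completed in 1889.", knowledge))              # supported
    print(classify(query, "The Eiffel Tower is in Paris, France.", knowledge))  # non-fabrication hallucination
    print(classify(query, "It was completed in 1925.", knowledge))              # fabrication hallucination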