Lianna

Okay, here's a question about #LLM hallucination:

Anecdotally, only a small amount of the false information I encounter when using LLMs is produced as a direct response to my query.

The chance of misinformation seems much higher when the LLM supplies supplemental context around the main query.

Is this a known phenomenon? If not, I might start a project on the topic.

(Example in reply. 1/3)

#AI #LLMs #ChatGPT #CompLing #ComputationalLinguistics #Computerlinguistik #Linguistik
