Monday, 8 September 2025

AI Hallucinations


Here’s a simple example of a hallucination by a language model:

  • Prompt: “Who won the Nobel Prize in Physics in 2023?”

  • Hallucinated Response: “The 2023 Nobel Prize in Physics was awarded to Jane Smith for her work on quantum computing.”

  • Reality: The 2023 Nobel Prize in Physics actually went to Pierre Agostini, Ferenc Krausz, and Anne L'Huillier for their work on attosecond pulses of light — not to anyone named Jane Smith.

This shows the model confidently generating a plausible but false answer because it predicts likely text patterns rather than verifying facts.
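The idea above can be sketched in a few lines of code. This is a toy illustration, not a real language model: the corpus statistics, names, and fact table below are all made up. It shows a predictor that picks the statistically most likely continuation with no notion of truth, and a separate verification step that catches the mismatch.

```python
# Toy "training data" statistics: how often each name followed a
# prize-announcement phrase in a hypothetical corpus. The counts
# and names are invented for illustration.
continuation_counts = {
    "Jane Smith": 12,        # most frequent pattern, so most "likely"
    "Anne L'Huillier": 3,
}

# Independent reference table used only for fact-checking,
# separate from the text statistics the predictor sees.
known_facts = {
    "nobel_physics_2023": {"Pierre Agostini", "Ferenc Krausz", "Anne L'Huillier"},
}

def predict_winner(counts):
    """Return the statistically most likely continuation.

    This is the hallucination step: the choice is driven purely by
    how often a pattern appeared, not by whether it is true.
    """
    return max(counts, key=counts.get)

def verify(answer, fact_key):
    """Check a generated answer against the reference table."""
    return answer in known_facts[fact_key]

answer = predict_winner(continuation_counts)
print(answer)                                # "Jane Smith" -- likely, not true
print(verify(answer, "nobel_physics_2023"))  # False -- flagged as a hallucination
```

The point of the sketch is that the predictor and the verifier are separate systems: nothing in the likelihood step ever consults the fact table, which is exactly why a confident, fluent answer can still be false.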


