Manipulated videos of political leaders, websites offering inaccurate medical advice, and celebrity death hoaxes are all examples of GenAI-created falsehoods. Misinformation has long circulated on digital platforms, but generative technologies make it easier to create "believable" stories, both intentionally and unintentionally, and thereby amplify their spread. Widespread misinformation can manipulate public opinion, polarize society, and even influence election outcomes, over time eroding citizens' faith in institutions and undermining democracy.
Why does GenAI produce misinformation?
If the data used to train a GenAI model contains misinformation or biased content, its output will reflect that false information. Moreover, when confronted with incomplete or contradictory input, GenAI models fall back on probability, filling gaps with output that is often creative but lacks any factual basis. This phenomenon is commonly known as "hallucination," although many prefer the term "error," which uncouples it from the medical definition and avoids humanizing the technology.
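The mechanism is easier to see in miniature. The sketch below is purely illustrative, not any real model's internals: the toy word table (NEXT_TOKEN_PROBS) and the generate helper are hypothetical, and the point is only that sampling follows learned word frequencies, with no step anywhere that checks whether the resulting claim is true.

```python
import random

# Toy next-token model: each word maps to plausible continuations with
# probabilities as if learned from training text. These weights encode
# only how often words co-occur, never whether a statement is factual.
NEXT_TOKEN_PROBS = {
    "the": [("senator", 0.5), ("actor", 0.5)],
    "senator": [("announced", 0.6), ("resigned", 0.4)],
    "actor": [("announced", 0.3), ("died", 0.7)],
    "announced": [("yesterday", 1.0)],
    "resigned": [("yesterday", 1.0)],
    "died": [("yesterday", 1.0)],
}

def generate(start: str, max_tokens: int = 4) -> str:
    """Build a sentence by repeatedly sampling the next word by probability alone."""
    tokens = [start]
    while tokens[-1] in NEXT_TOKEN_PROBS and len(tokens) < max_tokens:
        words, weights = zip(*NEXT_TOKEN_PROBS[tokens[-1]])
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

# Each run yields a fluent fragment, e.g. "the actor died yesterday".
# Fluency is guaranteed by the statistics; truth never enters the computation.
print(generate("the"))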
Like biased output, AI-generated misinformation can perpetuate stereotypes and reinforce societal prejudices.
GenAI-generated curricular materials can contain factual inaccuracies which, if undetected by teachers, can mislead students and distort learning.
Misinformation can also plague student research if students are not taught to use GenAI effectively, including how to evaluate and verify its output.