What should educators always do when working with AI? What should they never do? Why?
Can you imagine giving someone a license to drive without ensuring they understand the importance of checking blind spots before changing lanes, or the responsibility to yield to pedestrians? These are rules of the road that keep us and others safe. When it comes to AI, it is equally important to know the rules of safety and ethics. This is a key component of AI literacy, which should be ongoing and scaffolded over time. To begin, we recommend starting with the most crucial pitfalls to avoid.
Consider the following:
What information should never be entered into AI apps? (Think: FERPA)
Can AI output always be trusted? What should teachers look for when evaluating AI-generated content?
What responsibility do we have for oversight when using AI?
Additionally:
Even LEAs that decide not to move forward with AI implementation should communicate pitfalls to avoid. Teachers will use AI on their own, and doing so without any training can put individuals' data and technology systems at risk.
Consider multiple approaches to deliver this training to ensure that all stakeholders receive it.
"Pitfalls to avoid" should be merely a starting point of ongoing AI literacy training.
Lynnette Humphrey, Technology Curriculum Coordinator
Michelle Coots, Manager of Instructional Technology
Chelsey Laningham, Teacher at Aspire (Deer Valley's Online Academy)
Rita Boyd, Assistant Director of Technology
District Ed Tech Leadership Team