What are the best practices for using AI in education?


Best Practices for Using AI
The primary best practice for incorporating AI into education is to prioritize the roles of students, educators, and parents. Unlike many other domains where AI is applied, education is primarily a social endeavor aimed at transforming lives, making it crucial to place humans at the forefront. The AI policy report released by the US Department of Education in 2023 underscores this by highlighting the importance of the "people" element and advocating for using AI with "humans in the loop" (DoE Policy Report, 2023).

However, if humans are to remain central, we should consider using AI not with humans in the loop but with "AI in the loop." While "humans in the loop" implies that humans play a crucial role in AI-assisted decision-making, it can also suggest that AI makes the decisions while humans merely provide feedback, a framing better suited to industrial applications. In education, we believe it is AI, not the human, that should be in the loop. In essence, AI should serve as our assistant, providing continuous feedback, rather than as our supervisor.

The seven recommendations for education leaders outlined in the same policy report can also serve as best practices for using AI in education:

1. Emphasize Humans in the Loop
A technology-enhanced future is more like an electric bike and less like a robot vacuum. On an electric bike, the human is fully aware and fully in control, but their burden is lighter and their effort is multiplied by a complementary technological enhancement. A robot vacuum, by contrast, does its job on its own, freeing the human from involvement or oversight.

2. Align AI Models to a Shared Vision for Education 
AI models can often be wrong, so these tools should be used according to our educational priorities. The romance of technology can lead to a "let's see what the tech can do" attitude, which weakens the focus on goals and can cause us to adopt models that fit our priorities poorly.

3. Design Using Modern Learning Principles
AI tools should be designed based on the best and most current principles of teaching and learning. Many systems focus on what is wrong with a student and choose pre-existing learning resources that might fix that weakness. Going forward, we must harness AI’s ability to sense and build upon learner strengths.

4. Prioritize Strengthening Trust 
Distrust of edtech and AI is commonplace, and many people already distrust AI for a variety of reasons. In this context, AI tools should aim to strengthen trust. We should reject any vision of AI that replaces teachers; AI should support teachers, not supplant them.

5. Inform and Involve Educators 
The use of AI risks diminishing respect for educators and the value placed on their skills. Now is the time to show the respect and value we hold for educators by informing and involving them in every step of designing, developing, testing, improving, adopting, and managing AI-enabled edtech.

6. Focus R&D on Addressing Context and Enhancing Trust and Safety 
Innovators should focus their efforts to advance AI on the long tail of learning variability, where large populations of students would benefit from the customization of learning. R&D must take the lead in making AI models more context-sensitive and ensuring that they are effective, safe, and trustworthy for use with varied learners in diverse settings. 

7. Develop Education-Specific Guidelines and Guardrails 
As new situations arise in the use of AI-enabled learning technologies, regulations and laws related to student and family data privacy, such as FERPA, CIPA, COPPA, and IDEA, may need to be reviewed in light of new and emerging technologies. Leaders at every level need to be aware of the privacy and security implications and be prepared to confront the next level of issues effectively.

Image credits: Subodh Dahal (not AI)
