
A Vision of AI for Joyful Education

Here’s how we can avert the dangers and maximize the benefits of this powerful but still emerging technology


This article was published in Scientific American's former blog network and reflects the views of the authors, not necessarily those of Scientific American.


In a 2013 post, Facebook CEO Mark Zuckerberg sketched out a “rough plan” to provide free, basic internet to the world and thus spread opportunity and interconnection. However, the United Nations Human Rights Council reported that, in Myanmar, Facebook’s efforts to follow through on such aspirations accelerated hate speech, fomented division, and incited offline violence in the Rohingya genocide. Free, basic internet now serves as a warning of the complexities of technological impact on society. For Chris, an AI researcher in education, and Lisa, a science educator and student of international cyber policy, this example gives pause: What unintended consequences could AI in education have?

Many look to AI-powered tools to address the need to scale high-quality education, and with good reason. A surge in educational content from online courses, expanded access to digital devices, and the contemporary renaissance in AI seem to provide the pieces necessary to deliver personalized learning at scale. However, technology has a poor track record for solving social issues without creating unintended harm. What negative effects can we predict, and how can we refine the objectives of AI researchers to account for such unintended consequences?

For decades the holy grail of AI for education has been the creation of an autonomous tutor: an algorithm that can monitor students’ progress, understand what they know and what motivates them, and provide an optimal, adaptive learning experience. With access to an autonomous tutor, students can learn from home, anywhere in the world. However, autonomous tutors of 2020 look quite different from this ideal. Education with auto-tutors usually engages students with problems designed to be easy for the algorithm to interpret—as opposed to joyful for the learner.




Current algorithms can’t read motivation and are far from engendering long-term learning gains; instead they focus on engaging students in the short term. The technical challenges are enormous: building the ideal auto-tutor could be as hard as reaching true general AI. The research community has seen this as a challenge: we simply need to overcome our technical shortcomings to achieve the utopian dream.

But is the auto-tutor utopia a dream worth building toward? We outline some dangers that arise from the use of artificially intelligent systems such as auto-tutors and call for research into approaches that harness the potential good of AI in education while mitigating the risks. We believe our vision of thoughtfully developed AI systems working in tandem with naturally intelligent humans can support a broad community of learners around the world.

THREE DANGERS OF INTEGRATING AI INTO EDUCATION

1. Undermining socioemotional connections and skills. Students go to school for many reasons outside of rote knowledge acquisition, including development of socioemotional skills, human mentorship and human community. For all the potential inadequacies of human teachers and traditional classes, displacing these structures has costs. Many of us remember learning from teachers whose mentorship and guidance extended far beyond the subject they were charged with teaching. Might AI displace these interactions?

Furthermore, loneliness is on the rise, with younger generations lonelier than older ones. One study found a link between screen time and depression in adolescents: teens who spent more time on screens reported more depressive symptoms than youth who spent time on offscreen activities such as in-person social interaction, sports or homework. Decreased screen time could lead to significant gains in empathy levels. As UNESCO considers reorienting the goals of education to emphasize the socioemotional competencies that allow for peaceful and sustainable societies, compelling children toward screens may undermine those goals.

Yet AI systems tend to be designed to maximize the time students spend online. Even executives who develop addictive technologies understand these risks, as many send their children to expensive screen-free private schools for “the luxury of human interaction,” while poorer students are pushed toward cheap technological solutions. Beyond that, learning modules powered by AI could undermine vital meta-learning skills such as the ability to self-regulate, as students might adapt to machines doing the work of regulating their attention and fail to cultivate their own capacity to do so. Likewise, students might lose the ability to adapt independently to creative tasks in the real world that do not provide immediate feedback or guidance.

Meanwhile, the disruption AI introduces in the classroom could extend into homes and communities. Authority figures including teachers and parents may not adapt easily to a curriculum ported entirely onto digital devices, no matter how much “humanity” such technologies display. This resistance could be stiffest in traditional communities living in poverty, where some see the greatest potential impact from AI technologies, as families unaccustomed to children spending time on screens may resist the shift from human mentorship to AI tutors.

The trade-off of increased knowledge acquisition in exchange for less human-led learning becomes especially negative if the tools turn out to be less effective at improving knowledge than we hoped. Given the vital role that motivation plays in learning and the technical challenges we have yet to overcome, that is a distinct possibility.

2. Misuse of AI in education to extend power. We must also consider the possibility that malevolent actors will harness newly powerful and motivating educational tools to teach violent subject matter. In the same way that Facebook’s rise amplified both destructive and democratic organizing, newly effective teaching tools could help terrorists scale training in acts of destruction. Moreover, the goal of developing humanlike empathy in AI tutors will require processing deeply personal data on learners’ emotional and psychological states. Will oppressive governments use troves of psychoemotional data on citizens, collected from the time they are schoolchildren, for persecution or power consolidation? Or, seemingly more benign: will the rich just get richer?

While advocates tout the potential for AI-enabled educational tools to democratize education globally, researchers must consider how these tools can perpetuate or increase inequality. Privileged groups with access to digital tools become the source of training data for current AI algorithms. When machine-learning algorithms train on such a data set—perhaps one in which white students from the United States are overrepresented—the resulting model might be biased against students from other backgrounds, making it ineffective or even discriminatory when used with a different group.
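To make this concrete, here is a minimal sketch, using synthetic data and scikit-learn, of the kind of per-group evaluation that can surface such gaps. The groups, features and numbers are invented purely for illustration: a model fitted mostly to one group can look accurate overall while underperforming for an underrepresented group.

```python
# Hypothetical illustration: a model trained on data dominated by one student
# group can perform noticeably worse for another group.
# All data here is synthetic; the group differences are simulated, not real.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Simulate students whose features relate to outcomes differently per group."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > shift).astype(int)
    return X, y

# Training set: 95% group A, 5% group B (group A is overrepresented).
XA, yA = make_group(1900, shift=0.0)
XB, yB = make_group(100, shift=1.5)
X_train = np.vstack([XA, XB])
y_train = np.concatenate([yA, yB])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate separately on held-out samples from each group.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    X_test, y_test = make_group(1000, shift)
    print(name, "accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
```

In a real deployment, the same kind of disaggregated evaluation would need to be run on actual student populations before any tool reaches learners.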

Furthermore, scaling education by means of a centralized model reduces the number of voices that decide what is taught. Given that teaching is also a good learning tool for those who teach, this choice of who gets to teach becomes a choice of who gets to learn. Yes, providing new tools in communities that barely have access to textbooks will offer access to one form of knowledge. However, scaling centrally advocated curricula can enforce homogeneous learning goals and disempower local knowledge-providers. As Taskeen Adam put it, “As technology penetrates communities globally, so do neoliberal values, to the extent that these become the value system, where local, cultural or religious values are given second place, if they fit in at all.” Failing to take local voices into account will render these technologies irrelevant, or even useless, to the communities where education gaps are greatest; such communities may reject the technologies altogether, thereby increasing the very inequality the tools seek to correct.

3. Infringement on children’s rights: lack of data privacy and cybersecurity. Using vast amounts of personal data for personalized learning while protecting individual privacy and preferences is already a challenge. It rises to dangerous levels when AI targets young learners, who cannot yet consent to the collection of their personal data, and learners of highest need, who may not understand the risks of sharing their personal data or interacting with anonymous strangers online. Platforms that connect learners around the world are no utopia to those who spend time moderating online interactions. “Everywhere there is online exchange and children, there is child exploitation,” says Alex Stamos, Facebook’s former chief security officer.

Security researchers colloquially call this the “Lego penis problem,” referring to a Lego MMO (massively multiplayer online game) that allowed users to create and share Lego structures. Users started building penises from Legos and shared these with other players. Workers at the company lamented their inability to “detect dongs” at a rate fast enough to prevent the sullying of their child-friendly brand. Ultimately, content moderation that remained compliant with the U.S. Children’s Online Privacy Protection Act proved too expensive and may have played a role in the game’s shutdown.

According to Stamos, every platform from Lego to Roblox to Fortnite experiences various types of child exploitation, and exploitation will likely be exacerbated in communities less familiar with the internet. To fulfill the promise of online education supporting the least developed countries, researchers must account for the security of new users of online tools against malicious actors who are expert at taking advantage of even the most sophisticated users.

TOWARDS A BETTER VISION

The presence of risks need not erase our optimism; instead it can be a force for developing a more mature goal. Many have painted a vision of a better education system. We call on the AI research community and the education policy community to collectively imagine a set of grand challenges in AIEd that better align with our dreams for education and better account for the risks. We offer six goals for discussion, and we invite the public to join the conversation.

We will crowdsource new ideas here (giving credit to authors):

1. AI to facilitate more (and higher quality) human learning interactions. AI could be developed to support educators and the education system through automation of tasks and development of exciting problems, rather than by replacing teachers. Teachers and tools can work in tandem, with teachers filtering useful suggestions from AIs and tools supporting teachers in grading and tracking students.

2. AI to generate inspiring problems. AI can also help with the creation and dissemination of interesting problems in local contexts. Machine learning can create paintings that look like the works of Rembrandt. Can we also use algorithms to create engaging, personalized activities? This synthesis could establish a rich ecosystem of teaching and learning in which social and emotional interactions are bolstered, not replaced, by technology.

3. Low-data feedback. Inspiring, open-ended assignments provide opportunities for exploration and creativity. AI can assist teachers if it is designed to support and give feedback on these sorts of questions; however, current methodologies require data sets from huge numbers of students to enable meaningful AI-driven feedback on open-ended work. We need to learn to achieve this without consuming massive amounts of student data. Low-data AI seems promising and lowers the risks of data abuse. It is also practical: solutions won’t have to wait for millions of students to act as guinea pigs on each assignment before our algorithms are trained. (A toy sketch of one such approach appears after this list.)

4. AI to understand process. AI currently is most effective at teaching rote, structured lessons rather than supporting the creative, open-ended, team-based learning necessary to flourish in the modern world. A current line of research focuses on understanding process and the ability to learn rather than merely the final product. It is impossible for a teacher—especially in a large classroom—to keep tabs on every student the way a piano teacher gives feedback on a student’s hand positions. AI-powered tools could work in tandem with teachers to monitor process while averting the dismantling of schooling environments that allow for socioemotional learning. Such a hybrid of artificial and human intelligence could give teachers insight into each student’s process, elevating the teacher as coach and mentor.

5. AI for translation of educational content. Distribution of content has been uneven, especially for low-resource languages. Natural language processing, a branch of AI, is well-positioned to support translation that could help provide inclusive education in locations that need it most and encourage development of educational technology built by communities from the ground up rather than imposed by the West.

6. AI-powered risk detection for child safety. A healthy amount of energy should be put into developing content moderation tools that scale, so that online learning spaces are safe for all learners, especially children and those in vulnerable contexts, from threats such as malicious images.
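As a companion to goal 3 above, here is a toy sketch of low-data feedback: a teacher annotates a handful of exemplar answers once, and a new student answer is matched to the most similar exemplar by TF-IDF similarity, returning the teacher’s comment. The question, answers and feedback below are hypothetical, and this is an illustration of the low-data idea rather than a proposed production system.

```python
# Toy illustration of "low-data" feedback: a teacher annotates a few exemplar
# answers once, and new student answers are matched to the nearest exemplar.
# The exemplars, answers, and feedback below are all hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

exemplars = [
    ("Plants make food from sunlight, water and carbon dioxide.",
     "Good: mentions the key inputs of photosynthesis. Ask about the outputs too."),
    ("Plants eat soil to grow.",
     "Common misconception: soil provides nutrients, not the plant's food."),
    ("Photosynthesis happens in the roots.",
     "Check where chloroplasts are found; revisit the leaf diagram."),
]

texts = [answer for answer, _ in exemplars]
vectorizer = TfidfVectorizer().fit(texts)
exemplar_vecs = vectorizer.transform(texts)

def feedback_for(student_answer: str) -> str:
    """Return the teacher's feedback for the most similar exemplar answer."""
    vec = vectorizer.transform([student_answer])
    best = cosine_similarity(vec, exemplar_vecs).argmax()
    return exemplars[best][1]

print(feedback_for("I think plants get their food by eating the soil."))
```

Running this prints the misconception feedback for the “soil” answer, all without collecting data from millions of students; richer versions would need careful evaluation, but the point is that the teacher stays in the loop as the source of the feedback.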

AI has great potential to further joyful learning, but only if the concerns discussed are appropriately addressed. Jointly, the education policy community and researchers developing AI tools to be deployed on a massive scale have a serious responsibility. We must collectively consider the range of possible applications of our technologies, including harmful ones. We invite the public to join in the conversation about these AI challenges and how to mitigate their potential harms and promote human flourishing.

Education AI may seem like an unmitigated good; however, in this field, as in medicine, the Hippocratic oath applies: first, do no harm.

Chris Piech is an assistant professor of computer science at Stanford University. He was raised in Kenya and Malaysia. His research uses machine learning to understand human learning.


Lisa Einstein was a Peace Corps volunteer in Guinea from 2016 to 2018. Her career since returning from the Peace Corps has focused on expanding access to impactful digital tools and mitigating harms from emerging technologies.
