Image generated by AI via Wepik / Divided We Fall

This piece was originally published by Divided We Fall. It was written by George Lawton, Technology Journalist, and Julian Bessinger, Impact Lead, The Equity Institute.


Will AI Democratize Education or Increase Inequity?

By George Lawton – Technology Journalist

The term “artificial intelligence,” as used today, is a bit of a misnomer, suggesting something akin to a synthetic kind of intelligence. AI is now—and always will be—shaped by existing human conversations, the data humans decide to feed it, and the uses we decide are acceptable. Social intelligence is probably a better term for what we are collectively doing, but I will stick with the popular term AI.

AI is valuable in democratizing access to information and guiding learners down unique paths based on their interests, skills, opportunities, and challenges. AI can support teachers by suggesting games, lessons, and experiments based on each student’s strengths and weaknesses. It may also help automate paperwork, allowing teachers to spend more quality time with students. And while the education system’s pandemic-era experiment with Zoom schooling highlighted some disparities in access to digital tools, these discrepancies are starting to erode as the costs of digital tools and AI services fall.

Education Is More Than Just School

It is also important to appreciate that education, as we commonly talk about it, is about more than just learning. It also includes elements of status signaling and childcare. Adding AI into education has the potential to democratize or further imbalance each of these elements, depending on the choices we make about its use.

Status signaling in the education system involves ensuring students test well, build impressive résumés, and get hired into prestigious jobs. But what does this mean in a world where ChatGPT can write your essay or improve your résumé? Or where much of your coursework is open-source and publicly available? The fundamental problem is that the current education system was based on what made productive citizens during the Industrial Revolution: sitting still and not talking back to the boss. Today’s reality is much different, and debates about protecting against illicit AI use must take that into account.

Additionally, school acts as a form of subsidized childcare that frees parents to work, and AI’s role in democratizing this should not be neglected. Some people may attempt to use AI to babysit kids or stand in for teachers. But this is a universally bad idea because it breaks the emotional bonding that is so crucial to becoming fully actualized, kind humans.

Bringing Benefits to Society 

When used thoughtfully, AI can help us rethink testing and assessment processes to align learning paths, jobs, and opportunities with the unique skills and value each of us brings to society. Think about neurodivergent people, such as those with autism or ADHD, who may not have tested well under the old framework but bring tremendous value to society based on their unique perspectives on the world. Elon Musk comes to mind.

At this juncture, the most important consideration regarding AI in education lies in improving skills for responsible AI use. Rather than trying to catch kids using AI to do their homework, we should focus on learning how AI-powered systems break, perpetuate bias, and accelerate meanness.


We Need to Navigate Access Gaps and Bias 

By Julian Bessinger – Impact Lead, The Equity Institute

Representing AI as being shaped entirely by humans is a narrow perspective. It is true that AI’s output is conditioned by its input, but the sheer volume of data these systems consume far exceeds any individual human’s capacity to process. An emphasis on the humble inputs of machine learning models should not lead readers to imagine limited outputs. A forest fire can be sparked by a single recklessly discarded match. Likewise, the potential reach of this technology is impossible to predict.

Gaps in Access Remain

The utopian vision of universal access to AI-enabled education overlooks the realities of socioeconomic disparity. As the digital divide continues to widen, the likelihood that AI in education will magnify inequities is impossible to ignore. The idea that technology access gaps are diminishing is shaky at best. And AI cannot replace human involvement in childcare and teaching, given the irreplaceable value of human connection.

Brookings recently published a report about the future of the digital divide, pointing out that even when access to technology was relatively ubiquitous, students from wealthier families still performed disproportionately better. This is because, no matter how powerful digital tools become, access to competent, well-trained adults remains a necessity for childhood education and development. Access to technology alone cannot make up for disparities in access to those adults.

As for how those wealth gaps will look in the coming years as AI continues developing, Pew Research reports that the gaps between the richest and poorest Americans continue to grow. Given Mr. Lawton’s location in London, I’ll also note the global scale at which these effects will play out and the enormous wealth gaps that exist between countries and education systems around the world.

Bias in AI Cannot Be Ignored

While well-designed AI might help expose systemic inequity, current applications have already been shown to perpetuate existing biases. We must consider the depth and persistence of the white supremacist, ableist, Eurocentric, and patriarchal structures that continue to shape our physical and online worlds. Rooting those systems out of AI will be exactly as thorny as rooting them out of society.

The reference to people like Elon Musk as untapped sources of societal value is inaccurate. He is a man who succeeded in the current education system, receiving admission to an Ivy League university. Democratization of opportunity needs to do more than amplify the capacities of young white men. It also needs to connect the opportunities of a quickly shifting world to the full global potential of the coming generations of humans. That means leading with equity so that AI in education doesn’t become another vehicle for transferring privilege across generations.

While AI will revolutionize education, it’s essential to maintain a balanced view of the consequences of that revolution. The need for human oversight in the training, deployment, and use of AI in education cannot be overstressed.


AI Has Advantages and Disadvantages

By George Lawton – Technology Journalist

I agree with Mr. Bessinger that as the digital divide continues to grow, there is an increasing likelihood that AI will widen economic disparities as well. And despite progress toward goals like universal broadband and access to modern tools, there are still significant gaps. Just within the last week, cable operators pushed back against concerns about racial pricing disparities raised by the FCC. Regulatory pushback against existing disparities, along with appropriate subsidies, is pivotal for democratizing education, particularly when it comes to AI.

I also maintain that AI has the potential to tailor educational curricula to students’ unique knowledge pathways and interests. This will be transformative not just for students from disadvantaged backgrounds but also for those with neurodivergent learning patterns. The estimated 15 to 20 percent of people who are neurodivergent could benefit from customized curricula, regardless of their families’ wealth.

Humans Will Continue to Shape AI

But I don’t believe that representing AI as being shaped by humans is a narrow view. In fact, taking responsibility for how we train and deploy AI is essential to making meaningful progress in democratizing education and society. The danger lies in pretending we bear no responsibility for learning how to use it safely.

The most recent crop of generative AI apps was largely trained on human conversations and Internet posts, many of which reflect existing biases and patterns of meanness. Great human care will be required in choosing how we train these new tools, how we identify and label patterns of inequity in the training data, and how we might filter these out.

It feels essential to be circumspect in how we characterize the underlying patterns that produced past inequity if we are to build consensus and move forward. Polarizing language, however accurate it may feel to some, exacerbates tensions across sexual, racial, and religious lines.


We Need to Fix AI Bias

By Julian Bessinger – Impact Lead, The Equity Institute

We are in complete agreement in recognizing the critical role of AI in bridging the educational gap, especially in personalized learning. This technological advancement could represent a significant step towards democratizing education. By tailoring educational experiences to the unique needs and learning styles of each student, AI has the potential to revolutionize education delivery, making it more inclusive and accessible for all. We also agree that the human stewardship of these tools will be crucial to closing access and equity gaps.

The potential for bias in AI, which you rightly point out, extends beyond just the training process. We must also address biases that arise during data collection, model selection, and deployment, as these stages can inadvertently introduce bias with far-reaching consequences. By examining and addressing biases at each stage of the AI development process, we can work toward creating fair and unbiased AI systems that truly serve the diverse needs of all learners.

A Path Forward

When it comes to the representation of AI, there is a middle ground we can explore. Human shaping is inevitable during this stage of AI development and it is our responsibility to actively mitigate the harmful impacts of biases in AI systems. By implementing robust measures and ethical frameworks, we can ensure AI remains a tool that promotes fairness and inclusivity in education. Fostering collaboration and open dialogue among stakeholders, including educators, researchers, policymakers, and students, can help us navigate the complexities of AI and find common ground in creating technologically advanced and ethically sound AI systems.

By acknowledging and teaching AI about the complexities of our past behaviors, both positive and negative, we can shape it to become a powerful tool for fostering more equitable and inclusive education. Furthermore, incorporating diverse voices and perspectives in the development and decision-making processes surrounding AI in education can help prevent the perpetuation of biases, so that AI systems are instead designed with inclusivity and fairness in mind. This is a crucial step that current AI infrastructure lacks, mirroring as it does the tech sector, which has never been representative of the world its creations affect.

We are in an age where we have to ask ourselves what can and cannot be taken over by machine intelligence. By nurturing the human aspect of AI, maybe we can create educational experiences that are not only effective and efficient but also compassionate and empathetic.