Dr. Bonnie Stewart is a professor in the Faculty of Education (photo courtesy of Bonnie Stewart)
By Kate Hargreaves
From search results to article summaries, image generators and facial recognition, artificial intelligence (AI) seems to be everywhere.
Bonnie Stewart, a professor in the Faculty of Education at the University of Windsor, challenges the idea that this AI omnipresence is inevitable or even something higher education should embrace.
Dr. Stewart has worked in digital pedagogies since the late 1990s; her research combines educational and sociological lenses to examine how digital tools are used.
“My background has always looked at the ways in which we can use digital tools and technologies to widen participation and open access to education to a broader group of people,” she explains.
Since the pandemic, Stewart notes, the for-profit nature of the digital landscape has increasingly become a barrier to this participatory education experience, with the introduction of ChatGPT in late 2022 further exacerbating the issue.
For Stewart, the interest in digital tools has always been about human beings as they are digitally connected, a focus she brings to the pre-service and graduate courses she teaches in learning and digital technologies in the Faculty of Education.
Despite the recent emergence of pro- and anti-genAI camps — users and refusers, as they are sometimes called — Stewart does not count herself among either group.
“AI has uses,” she says. “GenAI has many uses. I wanted to look granularly at what those uses are based on media theory, critical theory and digital pedagogy tenets that have guided other eras of knowledge and media development.”
Stewart’s concern about genAI’s mass adoption in higher education is that it has been driven by a consolidation of power around AI, not by any value proposition related to higher learning. She frames the current buzz around AI as a “hype cycle,” a term trademarked by research and advisory firm Gartner, which introduced the model in 1995 to describe the trajectory of tech trends.
In this model, a technology or trend emerges, there is an effort to generate news about it, promises are made about its potential and investment increases until it hits a peak and drops.
“Hype is seldom based on concrete promises,” explains Stewart. “It is, by design, a speculation boom: it generates a narrative of innovation and future value, but it is driven by logics of business and media, not education or the public good. Hype’s purpose is to generate investment.”
A core part of the hype cycle model is that eventually, the bottom drops out and the trend crashes, along with a significant amount of capital.
“The problem with AI and the reason there’s been such an amplification of hype over the last eight or nine months is that an incredible amount of money has been sunk into generative AI by very powerful people,” Stewart says.
Stewart notes that the corporate lobby for genAI is pressuring governments and educational institutions to adopt the technology, largely because individual users cannot keep the industry afloat financially. “Our sector is being used as a bulwark to keep a bubble from bursting.”
In higher education, Stewart explains, this amounts to an existential threat to how teaching and learning occur, particularly in fields that rely on nuance and critical thinking as opposed to seeking a single correct answer.
She acknowledges many valid and meaningful uses of AI tools, such as building small language models that are more controllable and less likely to return incorrect or even dangerous answers.
However, she cites several instances of genAI chatbots encouraging young people to attempt suicide as evidence of the dangers of incorporating these tools widely, including into education, without regulatory structures and guardrails in place.
These are the conversations that Stewart is having in her pre-service and graduate-level courses in the Faculty of Education, where she encourages students to explore both the inevitability narrative and the possible uses of AI.
“My students are often users, sometimes enthusiastic users,” she explains. “I don’t present them with the idea that this is terrible and you shouldn’t use it. I also respect students who are refusing to use it. I try to ensure we do a fair amount of critical reading.”
“The key piece that I’m really trying to teach is that there is value in agency in education,” Stewart says.
“We cannot turn education entirely into outputs. If everybody is missing the learning process and just creating outputs, then getting feedback they don’t read from bots instead of educators, we have a very empty process, and we have gutted the whole purpose of higher education and our own capacity to think.”
For those who may find themselves feeling lost in the AI discourse, Stewart shares what she believes are the key messages everyone should take away.
“It has value, but no tool is inevitable. The forces pushing it as inevitable are social forces, not technical forces.”
“We need to recognize the climate impacts of this tool and that there are material well-being impacts for users. I want to encourage people to be intentional and to be aware that holding onto our human agency is critically important. There is great power in considering our own agency and what we want to protect — not in opposition to AI but in protecting boundaries of what we don’t want to give to AI.”
To read more from Dr. Bonnie Stewart on critical digital pedagogies, visit her website.
For more perspectives on AI from the Faculty of Education, the Journal of Teaching and Learning, edited by Professor Clayton Smith, recently released its special issue: AI and Machine Learning Intensifies Digital Transformation of Higher Education: Opportunities, Possibilities and Challenges. The issue features research on the rapidly changing world of AI and machine learning from around the world.