"Hello, World!"... revisited
The Educationalist. By Alexandra Mihai
Welcome to a new issue of “The Educationalist”! For the past three years I resisted writing about GenAI in this newsletter. It seems unbelievable, but it was a deliberate choice. Not because I ignored the developments or because I think they don’t have an impact on how we teach and learn, but because, while the topic has been widely debated elsewhere, I preferred to focus on what I consider essential for good education, both at the individual and the institutional level. Today, however, I will write about GenAI. I’ll try to put down my thoughts on one particular aspect that I think we really need to consider seriously: how does the use of GenAI impact our relationship with our students and with our peer educators? I already asked this question three years ago here, and I stand by many of the aspects mentioned in that post. What I want to do today is unpack the topic to try to find out what we are gaining, what we are missing, and the balance (or lack thereof) between them. I am adding some resources that I found extremely useful in crystallising my thoughts, as they broadened my critical perspective on the use of GenAI. I hope you enjoy reading, have a nice week!
Ten years ago, when I was doing my PhD on technology-enhanced learning (TEL), talking about technology with teachers was mostly about trying to convince them that some digital tools might be useful in their courses. Many responses pointed to using PowerPoint as “innovative” practice, and I even heard teachers proudly resisting PowerPoint. Times have changed. Quite radically. Or have they?
The Covid-19 pandemic rapidly exposed many to ways of teaching and learning they had never imagined. Some learned from this experience, some tried to resist by using technology minimally, while others used it as a means to uphold their pedagogical principles, despite the crisis. Universities reverted to in-classroom mode as soon as they could. But little time passed before technology struck again, this time shaking up the discourse relentlessly up to the present day.
Talking to several educational developer friends lately, it turns out all faculty want to know about now is related to the use of GenAI (ok, maybe that’s a bit of an exaggeration, but this is the general vibe). To attract attention, professional development opportunities need to have “AI” in their title. I am not saying that we should ignore the topic (we couldn’t even if we wanted to). What makes me sad is that topics like course design or assessment, key to quality education, are losing traction as the GenAI hype dominates the discourse. Its ubiquity, the fact that literally all the digital environments we visit insistently invite us to use AI, does not help in keeping things balanced. Funnily enough, trying to resist the hype could get you labelled as a new Luddite, not very far off from how I perceived those who resisted PowerPoint a decade ago. Times are changing, new tools capture the attention and imagination, while educational principles unfortunately often end up on the back burner.
We don’t need to keep hearing how the world as we know it will be shaken and transformed beyond recognition.
We don’t need more chatbots that will soon lose their appeal and be forgotten.
We need time to stop and think, resist the false sense of urgency and go back to the roots. How does learning happen? How can we support it?
The illusion of connection
I believe education is about PEOPLE.
That’s essentially why I will zoom in on the relational aspects of using GenAI in the classroom, leaving out many of the other important aspects discussed at length in some of the resources below and beyond.
Building a chatbot for your course seems pretty popular right now. While some find this fun, for me it feels like an upgraded version of “talking to the hand”. Human interaction (student-teacher and student-student) is a crucial part of learning. Using GenAI to outsource routine tasks should, in my view, only provide us with more quality time to interact with our students, to be there for them, to help them unleash their creativity. This is exactly what I would *not* trust a chatbot with. I want to see my students collaborate, challenge and build on each other’s ideas. I want to see teachers get inspiration from each other. This relational side of education is what I consider to be the most valuable.
In the Problem-Based Learning (PBL) environment we have at Maastricht University, collaborative learning plays a crucial role, as students work together to solve real-life problems. Learning depends on everyone’s preparation and willingness to collaborate. It’s often not about finding the right answer but constructively building knowledge together. I am not saying GenAI does not (or should not) play a role in this model, but I am convinced it cannot replace the human aspect of collaboration. I see it in the classroom, when students don’t use their devices and instead work on cases together. The disagreements and debates where they train their argumentation skills. The “aha moment” when everything clicks. I also see it when teachers come together to share their experiences, or when they co-teach and design a course together. This spark of inspiration is nothing like the transactional relation we have with a machine, no matter how convenient and time-saving it may appear.
In the discourse on using GenAI we often hear about the principle of “keeping the human in the loop”. Very important, everyone would agree. But this is becoming more difficult to put into practice. For me, this is not only about double-checking AI output, but also about trying to get input from other, non-AI sources. And this is increasingly becoming “the extra mile” many are not willing to go, once they have got their quick answers. After all, ChatGPT is always there, while colleagues and friends also have a life besides waiting for your questions and answering them on the spot.
In an ideal world, human collaboration and human-AI collaboration would not exclude but complement each other. However, the ubiquity of AI and its ease of access transform it into a seemingly viable and acceptable replacement. The illusion of connection. Something to be mindful of.
What is at stake?
As the end of the year is usually a good time to take stock of our practices, I propose an exercise whereby we honestly look at how the use of GenAI impacts students’ and teachers’ relationships with one another and, through this, their learning potential. The idea is to get a clearer understanding of what we get vs what we miss when we favour human-AI interaction over human-human interaction, both as students and as teachers. This analysis can hopefully help us focus on what matters, and turn to each other more often.
Below I will offer my personal assessment of the current situation, so I acknowledge my bias: as I already mentioned, I value the relational aspect of education highly. You can do this exercise too, also for other aspects where AI use has an impact. If anything, it can provide clarity and a starting point for intentional engagement with GenAI.
For students
What they get:
A sense of collaboration and peer learning. This is often an illusion. The relationship you have with a chatbot is hierarchical (it does what you tell it to do, essentially), so this is far from the genuine collaboration between two peers, with its back-and-forths, pushbacks and constructive disagreements; it’s in this unpredictable space of human interaction that learning happens;
Validation for their ideas. However, because of the nature of the exchange, this is short lived and never as satisfying, in the long run, as validation from a peer or a teacher.
What they miss:
Learning through social interaction. I strongly believe that looking for answers and debating them collaboratively with peers is a process that cannot and should not be replaced or cut short by human-machine interaction. While on the face of it the result may be similar, it lacks the depth and rootedness that learning needs.
Friction and resistance. Deep learning does not (only) need a yes-sayer; it needs a critical voice that can help you build an argument, something that happens naturally in human interactions.
For teachers
What we get:
Generic learning designs and teaching tips. While these can do the trick when we are busy and have no one to turn to and no time to read educational literature (false sense of efficiency), using them (exclusively) can lead to a monotonous, linear learning experience, less likely to engage your students;
The feeling we are “innovating”. But using a chatbot to outsource some of our interactions with, and among, students, when your course does not really need it or lend itself to it, is just a gimmick. Real innovation would be doing something that really benefits your students, be it with technology or without it.
What we miss:
A sense of community and a sense of being supported. This does not only mean quick answers (to complex questions). It means being rooted in a community of thought and practice, it means empathy, emotional support, practical support, inspiration.
A shared space to exchange, experiment, fail, learn, and try again. As educational developers, we have been working hard to build these spaces, and it is sad to see that the promise of convenience might drive teachers away from them.
Human interaction can be messy, but it can also be very enriching. It’s unpredictable, sometimes unreliable. Often confronting. It can be fun, but it can also be enraging. We need all these emotions for learning, for growing as human beings and as professionals. Please remember this the next time someone tells you the most important skill in your next job will be the use of AI.
I’ve been studying and working with the use of technology in education for almost two decades. In principle, I’m all for it, although I know it may not seem so from this post. But not at the expense of human connection. Technology has the potential to connect us. That’s why it’s so sad to see it now as the vehicle of our isolation (and worse, choice for isolation).
*this post is the result of some very nice conversations with friends and colleagues (you know who you are) that helped me put my thoughts down, despite my reluctance to write on this topic. I would never trade that for bouncing ideas off a (ro)bot. Sorry not sorry.
Resources
Human Literacy, by Eryk Salvaggio
The future of AI and education: Some cautionary notes, by Neil Selwyn
Critical Studies of Artificial Intelligence and Education: Putting a Stake in the Ground
Promoting and protecting teacher agency in the age of artificial intelligence, position paper by the International Task Force on Teachers for Education 2030
Machine teaching? Teachers’ professional agency in the age of algorithmic tools in education, by Tobias Röhl
“Don’t Forget the Teachers”: Towards an Educator-Centered Understanding of Harms from Large Language Models in Education, by Emma Harvey, Allison Koenecke & Rene F. Kizilcec
When the prompting stops: exploring teachers’ work around the educational frailties of generative AI tools, by Neil Selwyn, Marita Ljungqvist & Anders Sonesson
What does current genAI actually mean for student learning?, by Dan L. Dinsmore & Luke K. Fryer
Challenging The Myths of Generative AI, by Eryk Salvaggio
Against Generative AI, a great continuously updated collection of critical resources curated by Cate Denial