Welcome to the first 2023 issue of “The Educationalist”! I hope you had a relaxing break and a good start to the New Year. As we all know, the past few weeks have been dominated by talk about ChatGPT and the impact it may have on education. I don’t want to add to the growing list of articles offering ideas on how to deal with it (you can find some of the ones I found useful at the end of this post, as usual). What I want to do is go a bit deeper into the roots of this hysteria (yes, this is how I’ve been perceiving it, both in institutional responses and in the countless debates on various social media platforms). I am looking forward to your comments and insights, not necessarily on the topic of AI in education but more generally on the academic landscape we are part of and how we can make it more nuanced, critical and welcoming at the same time. Have a nice week!
The way I see it, academia’s response to ChatGPT is more about academic culture than about the tool itself. After all, AI tools are not new, nor is this sort of gut reaction to a new technology, coupled with the fear that it will destroy the teaching and learning landscape as we know it.
ChatGPT has obviously opened up a can of worms, and universities’ reactions mirror the mindset in academia, with quite a few unpleasant (though, for insiders, not surprising) facets. Here are 7 things the current debate reveals (if that was even necessary) about our academic culture:
Lots of pressure/high workloads: regardless of our positions, everyone seems to be under a great amount of pressure to perform. It’s like a hamster wheel, and we often feel we don’t have real control over how best to use our limited time and resources. This leaves us vulnerable in the face of new developments, as we don’t have enough bandwidth to critically assess the situation and come up with an appropriate response. Hence the panic;
Non-transparent procedures: university administration is very often a black box with missing or inefficient communication channels. This leaves us confused as to where and how decisions are made and can easily fuel paranoia about institutional responses to external stimuli;
Lack of trust in students: this very harmful narrative is unfortunately a premise for many educators, not entirely (or not always) out of bad will but rather stemming from a teacher-centred paradigm which emphasises the idea of control. Relinquishing some of that control is not an easy t(ask), and one that requires concerted action across all university actors;
Assessment as an afterthought: this is a symptom of a lack of pedagogical vision. The pandemic experience already made it clear that we need to really think about what and how we want to assess, going beyond the traditional modes and making assessment a crucial part of the learning process. This takes a lot of effort and brings the work of educational developers to the fore. Here are some ideas on how to think of assessment strategically;
Stale quality assurance (QA) policies: quality assurance in education is a complex mix of many factors (including faculty professional development, technology integration and academic integrity policies, to name just the most relevant ones for the current debate). But for many institutions QA is a mere bureaucratic process (one of the non-transparent ones mentioned above), which leaves them ill-equipped in situations that require quick policy responses and cross-disciplinary consultations;
Inertia: the biggest enemy, in my opinion. Responding to change in a timely and efficient manner is not one of the strong points of HE institutions. You experience this best when you enter a new institution and learn how many things happen a certain way just “because we’ve always done it this way”. Thus, disruption leads to panic and overblown reactions, followed by everyone forgetting about it and continuing on the same path (I suspect we’ll enter that stage soon);
Technological determinism (especially at higher levels of university leadership): the only thing that is, I feel, equally if not more dangerous than banning technology is thinking it can solve all problems. Universities already spent huge amounts of money on technology platforms during the pandemic, and we now risk seeing the same with AI and AI-detection tools. It is very important to keep an eye on ethical and privacy concerns here, to counterbalance this tendency to outsource crucial parts of our education to technology.
All this explains, at least to a certain extent, the fear carousel we’ve seen (or ridden ourselves) in the past weeks.
But, for the sake of our sanity, and that of our students, why don’t we try to get off this fear carousel? I know it’s not that easy, not least because if we jump while it keeps spinning we run the risk of getting hurt. So we can start by slowing down a bit, spending some time now, at the beginning of the year, on some deeper reflection: What is our goal? What motivates us? What scares us (precisely)? What support do we need (from our institution, from peers) and what support can we offer (to students, to peers)?
I know that some of the things I mentioned above are organisational issues, and we might feel we have little to no agency in changing those by ourselves. And while that may partly be true, acknowledging these realities, questioning them and positioning ourselves in relation to them is already an important first step. Universities are supposed to be learning organisations, but this is easy to forget when we see debates like the current one, in which panic takes over and can lead to questionable responses like bringing back pen-and-paper exams, banning essays, banning technology tools, etc., without critically assessing the problem at hand.
Personally, I am genuinely afraid of the negative impact all this debate centred on the “all students (would) cheat” narrative will have on student motivation and engagement with learning. So here’s an innovative idea: why don’t we take a moment to actually talk to and really listen to our students? To ask them about their personal goals (and help them figure them out), to discuss what motivates them, what scares them (yes, pretty much the same exercise as above). To ask them what support they need and what they are missing in their education. All this will help us understand them better and design learning experiences that make sense to them. Not necessarily assignments where they cannot cheat, but activities and assignments they genuinely want to engage in because they see them as relevant for their present and their future. Technology will keep evolving, and I think the important question to ask, alongside “how will it impact education?”, is “how will it impact our relations with our students?”.
The good thing is that we are starting to talk about it in our institutions. We should keep doing that, opening up this black box we complacently work in, or at least putting some pressure on it to open. However, these discussions are still taking place in silos and start from a place of fear and threat, so the next step is to have more nuanced conversations and, very importantly, an ongoing dialogue with our students. After all, if there is one good thing coming from the ChatGPT debate, it is becoming aware of the need to constantly reassess what is uniquely human. Testing and critically assessing new tools regularly (instead of banning them) will help us figure out how those limits are being stretched and enable us to adjust our role as educators, and our students’ role, accordingly.
5 resources worth checking
There is a wealth of resources on the topic, but I chose these five as they provide a good mix of practical advice and deeper discussion:
Assessment in the age of artificial intelligence - a great article by Zachari Swiecki et al., with a lot of insights into how we can rethink assessment in a meaningful way;
Chatting and Cheating: Ensuring academic integrity in the era of ChatGPT - an interesting read by Debby Cotton et al. that suggests a range of strategies universities can adopt to ensure these tools are used ethically and responsibly;
Academic Integrity? - an insightful reflection by Matthew Cheney on the concept of academic integrity and its ethical implications;
Critical AI: Adapting college writing for the age of language models such as ChatGPT: Some next steps for educators, by Anna Mills and Lauren Goodlad - a useful collection of practices and resources on language models, text generators and AI tools;
ChatGPT Advice Academics Can Use Now - very useful advice from various academics, compiled by Susan D'Agostino, on how to harness the potential and avert the risks of AI technology.