Exploring what matters: getting the most out of educational technology research
The Educationalist. By Alexandra Mihai
Welcome to a new issue of “The Educationalist”! Two weeks ago, I took part in a very interesting discussion on the direction in which research and scholarly publication on educational technology is heading nowadays. The discussion was part of a workshop organised by the International Journal of Educational Technology in Higher Education on its 20th anniversary (thanks a lot to my colleagues from the Universitat Oberta de Catalunya (UOC) for their kind invitation). Preparing for this event was a great opportunity for me to reflect on how I engage with this body of research from various perspectives (author, reviewer, consumer), what I am actually missing, and how I experience the research culture in universities. I am sharing my reflections with you here and I hope they spark some ideas on future research paths on educational technology in higher education. Enjoy reading and have a nice week!
A decade ago, I was an early career researcher who had decided to switch disciplines, leaving behind political science to enter what was then a new field for me: educational science. Two things were clear to me: I had a passion for education and I had gathered a reasonable level of expertise in designing, delivering and supporting faculty to deliver online and blended education. All the rest was new, exciting but also scary. Diving into a new area of research, educational science, and more specifically the use of educational technology, meant spending long hours at the library (the IoE Library in London) and having many revelations regarding theories, models and even terminology (you can read here one of my first blog posts, from 10 years ago, on exactly this topic).
That was when I started seriously engaging with the literature on educational technology. Since then I have become “a regular”, both contributing as an author and using it as evidence in my teaching and my educational development work, supporting faculty (and sometimes also students) to use educational technology effectively and meaningfully for teaching and learning. This gives me a mixed researcher/practitioner perspective, which I find useful in taking a broader view on how research in the field has been developing and where it is, or should be, heading.
Reflections on the status quo
Looking at how the body of literature in this field has evolved over the past decade, one thing is easy to notice: the Covid-19 pandemic led to intensive use of educational technology in education and, consequently, also to more research on it. We are now seeing the same thing with GenAI. But my question is: is this the research we need? And what about its quality? Does “more” equal “better” in terms of output?
I’ll start with two observations on current educational technology research (apologies for the generalisations; my aim is to highlight some underlying trends):
Research is often focused on the technology tools. With tools coming and going at a very quick pace, (how) will this research stand the test of time?
Research often offers snapshots of educational technology use, in (too) many cases neither systematic nor rigorous, based on interventions that cannot be replicated or lacking a clear connection to educational theories.
And here is what I am currently missing:
more research on how educational technology is embedded in the learning process. This implies explicit and well-argued links to learning theories, going beyond simple use cases;
more research on how teachers use educational technology and what their challenges are;
more research stemming from the existing needs of teachers and students. I see a disconnect between what is being heavily researched (often what is fashionable) and what the real needs of teachers and students are. This is important because the closer we are to the current educational needs and challenges, the higher the chances that the recommendations researchers come up with will be translated into practice and policy (e.g. strategic decisions at university level).
A wish list
Based on my analysis above, here is what I would like to see in the future in the field of educational technology research:
More research that goes beyond tools and provides valuable insights into how educational technology can enhance the learning process in all its complexity. We are chasing a rapidly moving target, so the key to remaining relevant is to focus on technology features and how they connect to learning, rather than on tools.
More studies on teachers’ use of educational technology, shedding light on how to meaningfully integrate technology at both course and curriculum level.
More research on how technology can contribute to addressing the increasing diversity of our learner population, by providing a flexible multi-modal approach to education.
More studies focusing on the institutional level, looking into strategic approaches to educational technology integration (short, medium and long term). These studies are usually published in journals with a broad higher education or higher education management focus, but they should also feature more prominently in educational technology journals. What is the urgency here? Every day, universities make decisions and spend considerable amounts of money, and most often this process is not based on evidence. So we need more and richer data from a variety of contexts in order to make the right decisions.
The impact of GenAI on the research and publication process
While we all have our wish lists, we also have to acknowledge reality and one of the factors we cannot ignore these days is the impact various GenAI tools have on the way we do our research and also on the publication workflow.
When used appropriately, many of them can be helpful at various stages of our research, such as processing or structuring large amounts of data. But I find writing a very personal process, over which I am not willing to relinquish oversight. Writing is intrinsically linked to how we think and how we build an argument. Overreliance on a tool can, in time, lead to a decline in our cognitive abilities, which is a very dangerous path to go down, as Mutlu Cukurova (UCL) pointed out during the discussions. Denise Whitelock (Open University UK) referred to this phenomenon as “lethargic cognition”, and it can also be linked to deskilling. Wouldn’t it be ironic, really, at a time when we keep talking about lifelong learning and upskilling, to consciously give up some of our crucial skills?
That is why it’s so important to maintain our agency in the research and writing process, which can be seen as part of a more holistic definition of AI literacy. It’s not only about learning how to interact more efficiently and effectively with LLMs but also about deciding when and for which (parts of) processes we want to interact with GenAI.
We need to remember that “AI can work spectacularly, but fails discreetly”, as Robert Clarisó Viladrosa (UOC) correctly pointed out. Often one needs specialised knowledge to notice hallucinations or factual mistakes. This is why we need to stay in the loop in our various roles, as researchers, teachers, editors or reviewers.
For publishing, this means more attention to ethical considerations (even more focus than before on questions of intellectual property, at a time when platforms are willing to give away our content for LLM training purposes), more serious discussions on who takes responsibility for inaccurate output (which constitutes fraudulent behaviour), and increased workload (filtering out submissions written with a substantive AI contribution). It is becoming clear that human oversight is crucial (very much as in the research process), together with transparent policies and an open dialogue among all the actors involved. Sweeping the topic under the carpet will not make it disappear.
The elephant in the room
All this leads me to the most important question of all, one that underlies all the considerations above: what are the limits of productivity and efficiency in terms of research?
Just by taking a few moments to reflect, we can come up with a few ideas: pressure to produce a high publication output leads (sooner or later) to trade-offs in terms of quality, as well as to the temptation to take shortcuts, such as using GenAI to speed up the process and be more efficient.
The root cause, thus, lies beyond individual practices, at the institutional level (you see now why I think we need more research at this level). Addressing it takes concerted efforts to review and, in many cases, redesign promotion criteria and recognition systems. Moving away from the “publish or perish” mentality and focusing on quality (originality, depth of reasoning, scientific rigour) instead of quantity would nourish a healthier academic culture. Yes, this is not something that can be easily achieved, as it requires many actors to align strategy and action. But the good news is that it can create a chain reaction, with hopefully more and more departments and universities starting to play this version of the game. What we can all do now is start having these difficult discussions in our teams and work towards a common understanding of our boundaries.