AI and Academic Writing
AI has launched a war on words. For the academic writer, the battlegrounds are paragraphs, abstracts, literature reviews, and reference lists. The small gains and losses, by which conflicts are often assessed, turn out to be sentences and the crafting of words themselves. In this virtual fog, it is impossible to know who is telling the truth, and who and what can be trusted. Indeed, it is even hard to grasp whether AI is your friend or enemy.
Over the past few months, I have experimented with a ChatGPT plugin on my database of notes. To see a paragraph of roughly sketched ideas instantly transformed into articulate, eloquently structured sentences has been profoundly confronting. Some bits were too generic, but others were expressive in ways that were interesting and genuinely valuable. New ideas and themes were inserted alongside my line of thinking, and with some refinement the text was of publishable standard.
When asked to look at my highlights extracted from PDFs and ebooks, ChatGPT merged the arguments and the work of dozens of authors in ways that, I believe, will remain hard to detect. Given 700 words of my notes for a new book, it “replicated” four weeks of work in less than 60 seconds, picking up from chapter 2 to sketch out an alternative chapter-by-chapter outline. Much like ChatGPT, I could go on.
Working on topics that straddle the social sciences and humanities, I, like the colleagues around me, have invested thousands of hours learning the craft of writing: climbing the eternal mountain of weaving complexity with clarity, rigour with innovation. Much like other creative processes, writing triggers the anguish and insecurities necessary for a form of inner satisfaction that eludes explanation, yet seems to pull you back in year after year.
To assist with the struggle, I have long used software for PDF databasing and programmes that extract notes and highlights for easy access at the moment of drafting. But experiments in integrating AI into my workflow in the past few months have created significant anxiety. A bomb has gone off that feels existential.
OpenAI, Microsoft, Google, and others herald this technology as your “assistant” or “co-pilot” for making leaps in productivity. They laud its benefits for providing a first draft that overcomes the paralysis that arises from a blank page. Not surprisingly, then, the launch of GPT-4 was defined by strident assertions about productivity gains and the conquering of knowledge frontiers. In attempting to better understand the impending transformation of academic life and universities, I embraced the need to learn fast. But it is here that my long-held irritation with Silicon Valley utopianism also kicks in.
True learning, whether for the student or the established scholar, only arises from struggle: the slow and painful task of assembling thoughts into coherent arguments that are elaborated and justified at length, over time. It is no coincidence that the very technology AI is built on – the task of anticipating the next word in large language modelling – has its exact parallel in the human task of writing. Any experienced writer knows that it is a craft wherein sentences, phrases, even commas might be laboured over for hours; a process in which such effort is continually discarded over the weeks and months spent redrafting. Writing, much like filmmaking, leaves a lot of detritus on the proverbial floor. But evidently there is little or no space for recognising such nuances, fundamental as they are, within these proclamations of productivity.
Most of the debate about ChatGPT's entry into higher education has addressed its role in the classroom and its use by students. However, academics will also take shortcuts, and doubtless some are already using AI to synthesise material and pass it off as their own. The common benchmark of a small number of articles per year to define a research-active academic in the social sciences and humanities seems redundant when every stage of the labour involved in producing them can now be bypassed.
In recent months, I have found it impossible not to contemplate this rapidly unfolding future as I sit and struggle to assemble words that I hope will both capture and advance my thoughts on the topic in hand. I have come to ask myself repeatedly whether there is any point in continuing to expend this sweat when I can hand things over to GPT-4. Why spend a week reading a book when “personalised digests” can now be produced in seconds? Why read twenty articles when software designed for academics “uses AI to find insights in research papers”? Slow labour is now confronted by the utopian mantra of “supercharged workflows” and products that monetise academic knowledge, as data, to create a “Spotify for papers”.
I have, of course, anticipated your answer to these questions. It is in that slow labour, in that struggle, that we comprehend and create the new. But AI redraws the battle lines. We work in an ecosystem, and AI's war on words pertains to the way it seems to wash away the value of slow labour and the critical importance of integrity. Most worryingly, it induces laziness, particularly in a world where productivity is defined by metrics of volume and timesaving.
There is little doubt that AI offers new ways to play with existing metrics in academia. And so, as I deliberate the degree to which, and at what point, I could, or should, integrate AI into my workflow, I wonder whether in the end I am just deceiving myself and the system itself. But cognisant of the extraordinary time-saves it offers, does staying “old-school” risk “falling behind” as the goalposts of academia move yet again? I added “and benefits” to that sentence (after “time-saves”), then deleted it. In a world of AI “assistants”, how does the five minutes spent labouring over such dilemmas get valued when its opposite is being monetised?
Publishers, journals and funding bodies will receive, and perhaps be flooded with, AI-written and AI-assisted texts. In this vast fog, it is very hard to know where this is all going. In the medium to long run, it may lead to much more meaningful forms of knowledge production, whereby the pyramid of publishing gets ever steeper. We may be heading towards peak production in academic publishing, as the current scholarly and financial logics that drive journal proliferation fade or evolve. Given that those most adept at using these technologies will rise across a wide range of fields, regardless of whether they are the sharpest thinkers, it seems as though we are entering a new phase in which the pursuit of knowledge as the foundation of academia is further weakened. I will leave it to those better placed to offer predictions of how such transformations and issues will play out.
In the same way that some argue we can no longer examine students on outputs, are the days of assessing academics on their written outputs coming to an end, given that every conventional output can be faked? If so, how are universities going to build new models to assess contribution and value? We are entering a time when the value of academic expertise, as expressed through publications and reports, is being profoundly called into question. It seems important, then, to ask how national higher education sectors will be re-mapped in response to AI. How long will it be before university Vice Chancellors and Presidents refuse to sign off on “tenure” for those whose value is primarily defined around teaching and writing?
My way into this conundrum has been the singular topic of drafting text. But dwelling on the minutiae of that helps reveal, I hope, the levels at which AI is going to penetrate the sector and its core values. Indeed, it would be naive to think that universities will not go through the same types of upheaval that other sectors are facing. AI is generating new modes of assessment, marking, performance evaluation, knowledge production and content delivery. It is likely that universities, perhaps en masse, will move to staffing models that increasingly privilege facilitators over professors. Naturally, as new skills are needed, existing forms of expertise will become redundant. And looking more broadly, modelling by Ed Felten et al. lays out how the humanities and social sciences, including area studies, will be among the hardest hit of all occupations.
It goes without saying that academia demands and seeks out original, creative modes of thinking. But AI dramatically raises the temperature on debates about the value of different forms of knowledge production. As this frenemy appears over the horizon, the collateral damage it will create will extend far beyond carefully crafted words, abstracts, and conclusions.
This text was written with no AI assistance.
The views expressed in this forum are those of the individual authors and do not represent the views of the Asia Research Institute, National University of Singapore, or the institutions to which the authors are attached.
Professor Tim Winter is Senior Research Fellow and Cluster Leader of the Inter-Asia Engagement Cluster at the Asia Research Institute, NUS. He was previously Professor and Australian Research Council Future Fellow at the University of Western Australia, and is author of Geocultural Power: China’s Quest to Revive the Silk Roads for the 21st Century and The Silk Road: Connecting Histories and Futures.