South African scientists are rewarded for every paper published. At some universities, the rewards are rather attractive: R50,000 (or $2,500) per paper, as long as it appears on the list of subsidised journals maintained by the Department of Higher Education. So, in theory, a productive researcher could earn a nice annual bonus by publishing five or six papers. In fact, a handful of authors already produce more than 50 papers annually. Do the math: at R50,000 a paper, that is more than R2.5 million a year – multimillionaires, before counting their normal salaries.
But here’s my prediction: AI will soon allow all of us to become superproductive researchers. I’ve written about how AI has already boosted my research productivity, particularly in speeding up my ability to code. But a new research paper, published in January, takes it to another level: it shows how large language models can now be used to automate the generation of academic papers.
The authors’ field of study is finance. Using a data-mining and validation protocol, they identified 96 statistically significant stock return predictors from accounting datasets and generated 288 fully structured academic papers with minimal human intervention. Each paper followed standard conventions, including comprehensive data descriptions, results sections and theoretical justifications. As the authors put it: ‘While the papers and their theoretical frameworks are automatically generated, it’s important to note that all empirical analyses and statistical validations are conducted using rigorous methods developed in the academic literature, ensuring the reliability (if not the interpretation) of the underlying findings.’ Let me be clear: these papers look like the real thing, and they will likely publish better than most South African finance papers produced last year.
This raises at least three important questions. The first concerns epistemological rigour. Although the statistical tests were carefully applied, the theories were often shaped by the results instead of being developed first. This practice, which the authors call HARKing (Hypothesising After Results are Known), undermines the core principle of scientific inquiry that theories should precede empirical findings. A related concern is the reliability of sources, as some citations were fabricated – an issue that will likely diminish as LLMs become more advanced.
The second is about academic integrity. LLMs can democratise research by lowering barriers for scholars, especially those in resource-limited settings or those who are not native English speakers. However, their ease of use risks flooding academia with low-quality, theory-light papers and artificially inflated citation counts. As the authors suggest, stronger peer review and a greater focus on theoretical originality over statistical significance are needed to maintain scholarly integrity.
But here’s the thing: that January paper is already outdated. In February, Google announced the AI co-scientist, a multi-agent AI system built on Gemini 2.0 that goes beyond automating paper writing – it actively generates novel hypotheses, research proposals and experimental protocols. Unlike standard literature review tools, the AI co-scientist mirrors the reasoning process of the scientific method, using a coalition of specialised agents to iteratively refine ideas. Early tests in biomedical research have already led to real-world validations, such as repurposing drugs for acute myeloid leukemia and identifying new targets for liver fibrosis treatment.
The system’s self-improving nature, combined with expert-in-the-loop guidance, suggests a future where AI is not just an assistant but a genuine collaborator in scientific discovery. As the economist notes, if research can be generated on demand, with AI producing better, faster and more novel hypotheses than human researchers alone, then the traditional ‘presearch’ model – where knowledge is accumulated in advance of need – may well become obsolete. The value of research may no longer be in producing papers, but in directing and validating AI-driven discoveries. AI guru Ethan Mollick summarises it best: ‘We are starting to see what “AI will accelerate science” actually looks like.’
This raises the most important question: what happens to South Africa’s research funding model? Superproductivity will collapse the financial model that supports research at universities, precisely because of the country’s monetary incentive system. Since researchers are paid for research quantity rather than quality – a paper published in Science receives the same funding from the government as a paper in an unread local journal – those who rely on research funding to attend international conferences or support research assistants will soon find their cost centres empty, unless they, too, start producing hundreds of papers.
But it is not only researchers that depend on this funding source; South African universities, too, rely on the pay-for-publication model. Each subsidised research unit – roughly one published paper – earned the university R127 511 in 2024. For a university like Stellenbosch University, where I teach, this income stream makes up 40% of total revenue, excluding investment activities (compared to 30% for student fees, 21% for grants & contracts, 5% for donor funding and 4% for sales & services). The truth, though, is that this income stream is zero-sum: a fixed government budget funds all these published papers. If UCT increases its research output, it gets a larger share, and all the other universities get less. As a consequence, researchers who use AI to publish an exorbitant number of subsidised papers will implode an already fragile system.
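To see why the zero-sum pot makes superproductivity self-defeating, here is a minimal sketch. All output counts below are hypothetical; only the R127 511 unit value comes from the figures above. The mechanism: a fixed pool is split in proportion to research units, so when one university doubles its output, every other university’s payout shrinks even though their own output is unchanged.

```python
def payouts(units_by_university, pool):
    """Split a fixed subsidy pool in proportion to research units."""
    total = sum(units_by_university.values())
    return {u: pool * n / total for u, n in units_by_university.items()}

# Hypothetical unit counts; pool calibrated so one unit is worth
# R127 511 at the starting distribution.
units = {"UCT": 2000, "SU": 2000, "Other": 6000}
pool = 127_511 * sum(units.values())  # the fixed government budget

before = payouts(units, pool)
units["UCT"] *= 2  # UCT doubles its output; the pool does not grow
after = payouts(units, pool)

# SU's payout falls even though its own output is unchanged.
print(before["SU"] > after["SU"])  # True
```

The same arithmetic holds for any split of the pool: every additional paper anywhere in the system dilutes the value of every unit everywhere else.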
The government has not helped to reduce this fragility. Minister Nobuhle Nkabane’s (perhaps illegal) December directive for public universities to freeze student fees despite rising costs reflects a decade-long pattern of implicit fee regulation under the so-called ‘social compact’. While government subsidies have stagnated and, in some cases, declined in real terms, the National Student Financial Aid Scheme (NSFAS) has become increasingly unsustainable, with capped allowances and outdated eligibility criteria failing to keep pace with inflation. As Stellenbosch University’s Stan du Plessis and Wim de Villiers explained in a Business Day column at the end of 2024, South African universities, already dependent on limited revenue streams, are now being forced to bear the financial burden of maintaining accessibility without the flexibility to adjust their fees to ensure long-term stability. This approach risks not only the financial independence of institutions but also their ability to support research.
Of course, funding pressures are a reality for universities elsewhere, too. In the Netherlands, staff at Leiden and Utrecht Universities went on strike last week because of funding cuts. In the UK, as John Gapper writes in the FT, public funding covers only about 70% of research costs, leaving institutions reliant on international fees to bridge the gap. Gapper suggests that the only alternative is for UK universities to align more closely with the private sector through partnerships and industry-funded research.
That, too, is the only solution I see for South African universities that hope to remain contributors to meaningful research. Assume the government funding model is obsolete. Focus exclusively on high-impact research – make appointments and promotions based on the substance of scholarly contributions, not sheer volume – and invest strategically in divisions that actively pursue private-sector collaboration. Sure, that means intellectually valuable fields like olfactory epistemology or the fiscal asymmetries of colonial intermediary trade networks may struggle to attract research funding. But the counterfactual is even worse: a proliferation of superficial publications that all compete in a race to the bottom and, most importantly, do little to advance knowledge or contribute to building a better society.
An edited version of this article was published on News24.
You describe the system well, but your predictions are wishful thinking, I'm afraid.
Researchers and universities became addicted to this funding model, where quantity is all that matters.
UCT and SU probably don't abuse the system that much, but there's an entire different planet outside the Cape Peninsula.
Thanks Johan - the concerns in this piece resonate beyond academia — investigative journalism faces a similar reckoning, albeit without the financial incentives universities enjoy for publishing research.
These tools can greatly increase the speed of research, crunching of data, trend and pattern spotting in financial statements and the like; not to mention the ease of publishing this story in multiple formats, from text to video and on numerous platforms.
But the real challenge isn’t in the production of stories — it’s in the decision-making around what gets published and how. The ecosystem of verification, editorial oversight, and ethical checks — the very things that should slow down a journalist in the name of rigour — is at risk. Many investigative journalists today, despite their best efforts, are less experienced, under-resourced, and pressured to work at AI-driven speeds.
Much like academia faces the risk of an AI-fueled flood of theory-light research, investigative journalism is staring down a similar crisis — one where the pressure to produce outweighs the discipline of verification. We already have a trust problem on our hands.