
"AI is not just another research tool; it is redefining what research is, how it is done and what counts as an original contribution. Universities are mostly unprepared for the scale of disruption, with few having comprehensive governance strategies. Many academics remain focused on the failings of early generative AI tools, such as hallucinations (confidently stated but false information), inconsistencies and superficial responses. But AI models that were clumsy in 2023 are becoming increasingly fluent and accurate."
"AI tools can already draft literature reviews, write sophisticated code with human guidance and even generate hypotheses when provided with data sets. 'Agentic' AI systems that can set their own sub-goals, coordinate tasks and learn from feedback represent another leap forwards. If the current trajectory continues, we're fast approaching a moment when much of the conventional PhD workflow can be completed, or at least be heavily supported, by machines."
Since late 2022, generative AI has shifted universities' attention from undergraduate teaching towards a deeper disruption of research itself. Doctoral education, long framed as an apprenticeship in original thinking, critical analysis and independent enquiry, is under pressure as AI reshapes how research is defined and done. Many universities lack comprehensive governance strategies, while early model failings such as hallucinations are being overtaken by rapidly improving fluency and accuracy. AI tools can draft literature reviews, produce sophisticated code, analyse data and even generate hypotheses; agentic systems can set sub-goals, coordinate tasks and learn from feedback. These changes blur what counts as an original contribution and push doctoral students towards framing questions and scrutinising AI outputs rather than producing every step themselves.
Read at Nature