Plenary F: Perspectives on Research Integrity and Generative Artificial Intelligence

Banqueting Hall
Wednesday, June 5, 2024
13:00 - 14:30


Generative Artificial Intelligence (AI) – automated technologies that can generate text, data, imagery, and a variety of other media, often indistinguishable from human-generated content – is now relevant across all academic disciplines and has profoundly changed how research can be conducted. The new capabilities of generative AI heighten research integrity risks while also presenting opportunities for advancing responsible research. Questions not yet asked at previous World Conferences emerge:

- The principles of research integrity ensure trustworthy research. Are the established principles robust to the development of new technologies, including generative AI?
- Researchers remain responsible for the trustworthiness of their work, irrespective of the tools they use. How can we support the responsible use of generative AI to ensure the positive impact of research?
- There are many potential positive applications of generative AI in research, and in publishing in particular. Should its use in publishing be banned?
- New innovations are accessible and in use before policy can adapt. Must the AI industry do more for research integrity? Can the research integrity community partner with industry leaders to address these issues?


Dr. Joris van Rossum
Program Director, STM Solutions

Perspective of publishers and policy makers

Dr. Shanshan Yu
Senior Research Manager
Fujitsu Limited

Perspective from Industry

Prof. Karin Verspoor
RMIT University

Impacts of generative AI on research: The Good, the Bad, and the Ugly


Daniel Barr
Principal Research Integrity Advisor
RMIT University
