Researchers aren’t the only ones using large language models like ChatGPT, but they themselves remain responsible for all data and analyses generated using AI. | Image: Keystone / Christian Beutler

AI is a double-edged sword: it makes it easier to cheat when publishing, yet it is also a useful tool for science and scholarship. This is why, in May 2024, an international group of experts published ‘five principles of human accountability and responsibility’ to regulate its ethical use in research. First, every publication must make transparent the respective contributions made to it by humans and machines. Second, responsibility for a publication’s correctness and conclusions must lie entirely with the humans who did the research. Third, AI-generated data and models must always be annotated so that they cannot be confused with real human observations. Fourth, researchers must minimise any potential harm caused by the use of the technology – for example, in matters of discrimination or data protection. Last but not least, “scientists, […] representatives from academia, industry, government, and civil society, should continuously monitor and evaluate the impact of AI on the scientific process” – adjusting and adapting the above principles on an ongoing basis.