Daniela Mahl is researching ‘Science Communication in the Age of Artificial Intelligence’ at the Department of Communication and Media Research at the University of Zurich. | Photo: ZVG

As of September 2024, Horizons has included the following statement in its publishing information: “The articles in Horizons conform to journalistic standards. Artificial intelligence may be utilised for specific tasks (e.g., assisting with research or making transcriptions) but our authors write their texts themselves and take responsibility for all content”.

Daniela Mahl, does our focus on the responsibility of human authors make sense to you?

Yes, absolutely. Trustworthiness remains the most important prerequisite in journalism, so people need to assume responsibility.

Is it sufficient?

Here, your editorial team is making it clear that it is committed to transparency. Your statement positions you as responsible and open in your use of new technologies. But it’s also extremely important to state exactly for what purposes, and to what extent, AI is used in your everyday editorial work. As it stands, these aspects remain rather vague and leave quite a lot of room for interpretation.

“You use an internal checklist. You could publish it”.

So we should be more precise about the tasks for which AI is being utilised?

Yes. Of course, you can’t do that for every individual article. But you already use an internal checklist for your authors that states when AI may be used and when not. You could publish that checklist to ensure even more transparency.

OK, got it. But what other important principles should we respect?

The German Press and Journalists Federation insists that editorial teams should appoint representatives to check whether their use of AI complies with the relevant rules.

“Many tools are not transparent about where their data comes from”.

Do any technical requirements exist for using AI in journalism?

Many tools are not transparent about where their data comes from, nor about how their algorithms work. But there are also more open systems whose code is accessible. The German Press and Journalists Federation suggests certifying AI systems for journalistic use – for example, with regard to balance, data protection and security. This could be done in collaboration with politicians or NGOs.

Some editorial teams use a self-created ‘quality hallmark’, like our AI statement at Horizons. But wouldn’t it be better to set up an institution that monitors AI use across all media?

I see your statement of responsibility more as an endeavour to be transparent. It would be difficult to set up an overarching supervisory authority to continuously monitor how, and to what ends, AI is used across different media organisations. The best approach for editorial teams, as things stand, is to develop guidelines and communicate them transparently – and then to set up such an oversight body within their own organisation.