
IQ Media Summer School: Artificial Intelligence, Transparency, and Accountability in Journalism

Nick Diakopoulos | Photo credits: Dimitris Adamis
The IQ Media Summer School kicked off with a keynote from Nick Diakopoulos (Northwestern University) examining the systemic impacts of AI on the information ecosystem. He advocated foresight through scenarios as a method for thinking rigorously and systematically about the future and anticipating risks. His team's Journalism Futures Project collected over 800 scenarios from more than 70 countries, revealing that AI is consistently framed as both a powerful tool for productivity and a threat to credibility, trust, and jobs. Notably, the results showed a significant lack of consensus in how people think about AI and the future. The complexity of this network of interrelated factors suggests that understanding the information ecosystem requires examining how factors interconnect across the whole system, rather than focusing on linear relationships in isolation. Diakopoulos also introduced the idea of systemic policy simulation as a way to study interventions across the whole ecosystem, using large language models (LLMs) to iteratively write and rewrite scenarios under specific policy conditions in order to simulate their impact. The keynote concluded that scenarios are both data and a form of collective thinking; by analysing them with LLMs, researchers can identify leverage points in the ecosystem, test potential policy interventions, and guide future investments that amplify AI's benefits while mitigating harms.
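To make the mechanism concrete, here is a minimal sketch of what such an iterative scenario-rewriting loop might look like, assuming an OpenAI-style chat API; the model name, prompt wording, and policy text are illustrative placeholders, not the Journalism Futures Project's actual setup.

```python
# Minimal sketch of LLM-based policy simulation: repeatedly rewrite a
# scenario as if a given policy were in force, and inspect how it drifts.
# Assumes the OpenAI Python client; the model, prompts, and policy text
# below are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

POLICY = "Platforms must label all AI-generated news content."  # hypothetical intervention

def rewrite_scenario(scenario: str, policy: str) -> str:
    """Ask the model to revise a future-of-journalism scenario under the policy."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You revise speculative scenarios about the news ecosystem."},
            {"role": "user",
             "content": f"Rewrite this scenario assuming the following policy is enacted.\n"
                        f"Policy: {policy}\n\nScenario: {scenario}"},
        ],
    )
    return response.choices[0].message.content

scenario = "By 2030, synthetic news sites outnumber human-run outlets..."
for step in range(3):  # each pass simulates another round of second-order effects
    scenario = rewrite_scenario(scenario, POLICY)
    print(f"--- iteration {step + 1} ---\n{scenario}\n")
```

Tracking how the scenario text shifts across iterations, repeated over many seed scenarios and policy conditions, is the kind of signal a systemic policy simulation would look for.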
The session "AI in Data: From Classroom to Newsroom," led by Martin Chorley, highlighted the integration of AI into both journalism education and professional practice. Drawing on the experience of the MSc in Computational and Data Journalism, Chorley emphasised how the degree combines coding, data analysis, and journalistic technique to create visual and interactive news stories. AI emerged as a core component of this process, acting as a workflow enabler that can support ideation, prototyping, and the planning of data analysis. While these tools do not necessarily require deep technical knowledge to get started with, Chorley stressed that domain knowledge and critical thinking remain essential. LLMs, for instance, do not truly "know" or "understand" information. They can assist with brainstorming and early experimentation, but once projects move into more technical territory, journalists need a solid foundation to verify, expand, and maintain their work. The session concluded with a balanced perspective: AI can be a powerful aid, but it also carries the risk of inaccurate and misleading results. For journalists, this means that prompts are crucial, and AI should be treated as a helpful partner: useful, but never fully trusted without human oversight.

Bahareh Heravi and Martin Chorley
Two panels moderated by Bahareh Heravi featured leading voices from across the media industry to explore the challenges and opportunities of AI adoption in newsrooms. The first panel, “AI Innovation in Newsrooms: From Promise to Impact,” brought together representatives from the BBC (Laura Ellis), Financial Times (Oli Hawkins), Tamedia (Titus Plattner), and Reach plc (Karyn Fleeting) to discuss their organisations’ approaches to AI adoption and innovation.
- The BBC takes an encouraging but cautious approach to AI experimentation, especially for audience-facing material. Ellis emphasised the importance of organisation-wide discussions across departments and drew attention to the benefits of media organisations coming together to tackle the challenges of AI in the newsroom.
- Fleeting described a similarly low-risk approach at Reach plc: the organisation focuses on driving AI literacy and adoption while balancing risk mitigation with innovation. She also highlighted the importance of stakeholder accountability; in particular, extra care is taken to mitigate risks around AI and intellectual property.
- The Financial Times, underlining the importance of trustworthiness and factual accuracy in its news output, has employed AI largely for low-risk, internal use cases; any use of AI in the production of journalism at the company must comply with editorial policy and go through internal clearance. Via the ChatGPT web interface, the FT encourages individual experimentation through its partnership with OpenAI, which provides access for all members of the organisation without any data being retained by OpenAI; for programmatic and systematic use of AI, the FT is testing local models, citing security concerns around sending sensitive data to commercial models (see the sketch below). Examples of internal tools built by the FT include a wire-service tool that personalises news story feeds and Datawatcher, an ML model trained to read charts and produce daily briefings for data journalists.
- For Tamedia, where cultural transformation remains the biggest challenge, the focus lies in encouraging AI adoption and literacy, teaching journalists not to fear AI but to engage with it critically. Having invested significantly in an AI tool-building team, Tamedia is now looking to achieve larger productivity gains via internally developed tools, and is considering launching user-facing applications that are human-on-the-loop or human-out-of-the-loop; here, the organisation differs from the BBC and Reach plc, which continue to take a strictly human-in-the-loop approach to AI for audience-facing output.
Although each organisation is taking its own approach to AI adoption and innovation, all are actively building their own tools in addition to buying commercial licences, subject to time and resource constraints. For example, the FT is investing in building tools around certain data sources to create a long-term competitive advantage; Reach plc builds where possible and buys where there is an immediate requirement; the BBC, while buying a lot, also builds a lot around what it buys; and Tamedia looks to build tools that are not already available on the market.
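As a concrete illustration of the local-model pattern the FT described, here is a minimal sketch that keeps sensitive text on the journalist's own machine; it assumes Ollama as the local runtime, and the model name, prompt, and document are placeholders rather than any part of the FT's actual stack.

```python
# Minimal sketch: query a locally hosted model so sensitive source material
# never leaves the machine. Assumes the Ollama runtime and its Python client
# (pip install ollama), with a model already pulled, e.g. `ollama pull llama3.1`.
# Model name, prompt, and document are illustrative placeholders.
import ollama

SENSITIVE_TEXT = "…confidential document text…"  # stays local; never sent to a commercial API

response = ollama.chat(
    model="llama3.1",  # any locally available model
    messages=[
        {"role": "system", "content": "You summarise documents for journalists."},
        {"role": "user", "content": f"Summarise the key claims in:\n\n{SENSITIVE_TEXT}"},
    ],
)
print(response["message"]["content"])
```

Because inference runs entirely on local hardware, the trade-off is raw model capability and maintenance effort, which is presumably why such setups are reserved for programmatic work on sensitive data rather than everyday experimentation.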

Catherine Sotirakou | Photo credits: Dimitris Adamis
The second panel, "AI Literacy in the Newsroom," explored in more detail how different organisations are approaching the challenge of fostering AI literacy in the newsroom, with panellists Ellis (BBC), Fleeting (Reach plc), Cheryl Phillips (Stanford University), Plattner (Tamedia), and Catherine Sotirakou (IQ Media). The BBC, for example, has taken a value-driven approach to AI training through a mandatory course based on company-wide AI and transparency guidelines, designed and delivered by the BBC Academy; additionally, practical support is offered through a network of official AI leads embedded across teams. Similarly, Reach plc conducts mandatory AI training across the entire organisation in the form of an annual test-based module, which all employees are required to pass; a network of AI champions in each team is responsible for cascading knowledge and gathering feedback, while AI materials, such as the company's AI policy framework and catalogue of approved tools, are featured prominently on the intranet.
Tamedia has adopted a mix of mandatory and optional courses but emphasised the difficulty of persuading journalists to prioritise AI training amid daily deadlines, noting that management commitment is crucial to overcoming inertia in AI adoption. Meanwhile, the Big Local News initiative at Stanford provides AI training for 40 local news organisations across California, some of which have no AI policy or have policies forbidding the use of AI. This training pairs specific tools with real data to demonstrate the validity of AI outputs and to teach responsible, practical use of AI through real-world experience with built-in guardrails. In Greece, which recently became one of the first countries in Europe to adopt AI guidelines for working journalists after the Panhellenic Federation of Journalists' Union published ethical guidelines on the use of AI, IQ Media offers various AI training courses, as well as masterclasses and Ask Me Anything (AMA) sessions for several European news organisations. While each organisation differs in its approach to AI training, the key takeaways can be summarised as follows: in newsrooms, AI literacy must extend beyond editorial staff to the wider company; hands-on, practical exposure to AI is key to developing AI literacy and understanding; and enthusiasm about AI across all levels of management makes a difference. Despite cultural barriers and some resistance to change, strong leadership and sustained training initiatives are beginning to close literacy gaps and foster more confident, responsible use of AI in journalism.
This article was written with contributions from Zewei Jin.