AI Governance Library

AI in Strategic Foresight: Reshaping Anticipatory Governance

AI is increasingly used by foresight practitioners to accelerate horizon scanning, trend analysis and scenario development, but it raises significant challenges around trust, bias, transparency and governance.
⚡ Quick Summary

This white paper by the OECD and the World Economic Forum examines how artificial intelligence is reshaping the practice of strategic foresight and, by extension, anticipatory governance. Based on a global survey of 167 foresight practitioners from 55 countries, it shows that AI is already widely used to support early-stage foresight activities such as horizon scanning, trend clustering and scenario drafting. The report positions AI as an augmenting force rather than a replacement for human judgment, creativity and interpretative capacity. While practitioners value efficiency gains and expanded analytical scope, the paper highlights deep concerns around reliability, bias, lack of transparency and weak ethical governance. Overall, the document frames AI as a catalyst that can strengthen foresight capabilities, but only if embedded within human-centred workflows and robust governance structures.

🧩 What’s Covered

The paper opens by situating strategic foresight as a core capability for resilience and long-term governance, then explores how AI is altering this field in practice. It presents detailed survey findings on AI adoption across sectors, showing that private-sector practitioners report far higher confidence and skill levels than those in the public sector, academia or civil society. Most foresight experts rely on off-the-shelf tools, often using multiple systems simultaneously, while only a small minority operate fully customised, end-to-end AI-enabled foresight workflows.

A central contribution of the paper is its three-level maturity model for AI integration in foresight: basic analysis augmentation, AI as a creative sparring partner, and fully integrated, customised AI workflows. The majority of practitioners remain at the first level, using AI primarily for synthesis, scanning and sense-making. More advanced uses, such as automated signal detection, scenario stress-testing and complexity mapping, are still rare.

The report systematically maps perceived benefits, including time savings, large-scale data processing, idea generation and improved accessibility for non-experts. It also provides a granular breakdown of challenges: ethical and governance gaps, technical skill shortages, leadership resistance, data security constraints and the misalignment between AI outputs and human-centred foresight methods. A dedicated section examines future risks, including deskilling, over-reliance on AI and erosion of trust in foresight outputs if quality cannot be assured.

💡 Why it Matters

The paper matters because it treats AI in foresight not as a technical upgrade but as a governance challenge. Strategic foresight directly informs policy, regulation and long-term investment decisions; weaknesses introduced at this stage propagate downstream. By highlighting the uneven distribution of AI skills and the lack of ethical frameworks, especially in the public sector, the report exposes a growing anticipatory capacity gap. For AI governance professionals, the findings reinforce the need to integrate foresight, risk management and ethical oversight early in the policy and organisational lifecycle. The document also provides a strong empirical basis for arguing that trustworthy AI requires anticipatory governance, not reactive compliance.

❓ What’s Missing

While the paper diagnoses challenges clearly, it stops short of offering concrete operational guidance. There is limited discussion of how existing AI governance frameworks, standards or forthcoming regulations could be translated into day-to-day foresight practice. Case studies of mature, fully integrated foresight workflows would have strengthened the practical value. The report also underexplores the implications of generative AI agents and autonomous systems, despite briefly referencing them, and does not address how foresight teams should audit or validate AI-generated futures in a systematic way.

👥 Best For

This resource is best suited for public sector foresight units, policy strategists, AI governance and risk professionals, and organisations responsible for long-term planning under uncertainty. It is particularly valuable for leaders seeking to justify investment in AI literacy, governance frameworks and human-centred foresight capabilities.

📄 Source Details

Joint white paper published by the OECD and the World Economic Forum in November 2025, based on a global survey of foresight practitioners across government, business, academia and civil society.

📝 Thanks to

OECD Strategic Foresight Unit and the World Economic Forum Strategic Foresight team, with special recognition to the contributors who shaped the survey and analysis.

About the author
Jakub Szarmach

