AI Governance Library Newsletter #2: Trust, Birds & AI Rationalist Cults

Governance needs clarity—not clickbait. This issue breaks down what makes a good AI governance resource, why Bird & Bird’s AI Act guide sets the bar, and how fake AI cults, Roman GIS maps, and server-blessing priests say more about tech than most frameworks do.

No Funnels. No Tricks. Just Trust.

One of the core AIGL rules is simple:

No hidden sales funnels. Ever.

If a resource is built to convert, not contribute, it doesn’t belong in this library. That means no toolkits that lock you behind a paywall. No whitepapers that are just lead magnets. No “free” downloads that ask for your email so someone can sell you a workshop later. Because governance needs clarity, not clickbait.

And the worst part? These tactics aren’t even subtle anymore.

We’ve all seen them:

  • A “compliance checklist” that requires a login just to access the last page
  • A framework that nudges you toward a premium subscription for “full implementation support”
  • A risk taxonomy that keeps referencing the company’s proprietary tool as the solution

This isn’t support. It’s a trap—and it erodes the trust this field runs on.

How to Succeed with Sponsored Content

Shorthand's excellent piece on sponsored content lays all of this out with precision. It's full of sharp advice:

  1. Know your target customers and where to find them. Understand your audience deeply—what content they consume, which platforms they use, and what topics interest them.
  2. Have a clear strategy for your sponsored content. Define the purpose, tone, and style of your content upfront. Make it informative or entertaining—but avoid the hard sell.
  3. Pick a partner that aligns with your brand and audience. Choose collaborators whose mission, tone, and audience fit with yours. Their content will represent your brand.
  4. Over-communicate and set expectations with your partner. Be clear about goals, deadlines, assets, and responsibilities to avoid confusion and wasted effort.
  5. Review sponsored content before it’s published. Always double-check the final product to ensure it matches your brand values and goals—especially on one-off projects.
  6. Understand and follow regulatory requirements. Disclose when content is sponsored. Comply with local advertising laws and platform guidelines.
  7. Be creative, make it visual, and try out new ideas. Use storytelling, visuals, and multimedia to boost engagement and make the content stand out.

Here’s the thing: that list is also a cautionary tale for AI governance.

If your resource needs a campaign strategy to feel trustworthy, it probably shouldn't exist. If you're negotiating a partner's tone and visuals like it's an ad buy, it's not a governance tool. And if you're A/B testing a whitepaper headline to increase lead gen, it's already too late.

Let’s break it down:

🔹 Tip 1: Know your target customers

In governance, we don’t have “customers.” We have publics, regulators, researchers, and practitioners. The moment we start thinking of them as conversion targets, we lose the plot.

🔹 Tip 2: Have a content strategy

Useful content doesn’t need a strategy. It just needs to be right. The Bird & Bird AI Act guide didn’t need teaser graphics or email drip sequences. It delivered clarity, and people came to it.

🔹 Tip 3: Pick aligned partners

Yes, alignment matters. That’s why AIGL features work by academic groups, nonprofits, law firms, and standard-setters—not paid influencers or brand collaborations.

🔹 Tips 4–5: Set expectations and review the content

In sponsored content, you control the message. In governance, you shouldn’t need to. The work should stand on its own, not be edited for optics or messaging.

🔹 Tip 6: Follow disclosure laws

If a piece of content is created to promote something, it should say so—clearly. But in AI governance, we can do better: just don’t publish promotional content at all.

🔹 Tip 7: Make it visual, try new formats

I’m all for better design and accessibility. But visual storytelling should support comprehension, not paper over shallow substance.

AIGL isn’t a place for “content strategy.” It’s a place for people who need to make actual governance decisions—under pressure, with stakes.

That requires information with no strings attached.

So here’s the filter I use:

If this document didn’t lead to a sale, would it still be worth sharing?

If not, it’s out.
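
If you like your rules executable, here's what that filter looks like as a minimal Python sketch. Every name in it is hypothetical (there's no real AIGL codebase behind this), and the actual review is human judgment, not code:

```python
# A minimal, hypothetical sketch of the AIGL inclusion filter described
# above. Field names are illustrative, not a real schema.
from dataclasses import dataclass

@dataclass
class Resource:
    title: str
    paywalled: bool             # locks part of the content behind payment
    requires_email: bool        # "free" download that harvests leads
    promotes_own_tool: bool     # keeps pointing at a proprietary product
    useful_without_sale: bool   # would it still be worth sharing?

def belongs_in_library(r: Resource) -> bool:
    """Reject anything built to convert rather than contribute."""
    if r.paywalled or r.requires_email or r.promotes_own_tool:
        return False
    return r.useful_without_sale

# Example: a lead-magnet "compliance checklist" fails the filter.
checklist = Resource("Compliance Checklist", paywalled=False,
                     requires_email=True, promotes_own_tool=False,
                     useful_without_sale=True)
print(belongs_in_library(checklist))  # False
```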


🔍 Spotlight Review: Bird & Bird’s Guide to the EU AI Act (2025)

Quick Summary

At the time of writing, the updated guide has almost 400 reactions on LinkedIn, so it MUST be doing something right. Right?

This detailed legal explainer breaks down the AI Act from top to bottom. Bird & Bird’s guide is ideal if you want clarity on enforcement timelines, risk classifications, rights, and obligations without reading the entire Regulation. Less opinion, more structure.

What’s New in This Update

The April 2025 edition brings in the European Commission’s new guidance published on 4 and 6 February 2025.

It now includes:

  • A sharper breakdown of prohibited practices under Article 5
  • An updated interpretation of what counts as an AI system under the Act

These updates are key, especially for teams struggling to classify use cases or assess borderline deployments. The guide also continues to walk through implementation timelines, sandbox arrangements, and real-world testing frameworks.

What’s Covered

This guide doesn’t just repeat the AI Act—it explains it. Structured chapter by chapter, it offers context, practical guidance, and cross-references to help interpret the Regulation’s dense language. Key highlights include:

  • A full unpacking of the risk-tier system, from minimal to high risk
  • A deep look into prohibited practices, including nuances from the Feb 2025 Commission guidelines
  • Clear mapping of obligations for high-risk systems and general-purpose AI models
  • A roadmap of compliance deadlines across 2025–2030
  • Governance explained—who enforces what, and how

What sets this guide apart is the careful crosswalk between legal roles (provider, deployer, distributor) and how AI systems are placed, used, or modified in the market. It goes beyond theory by illustrating who actually carries responsibility under each article.
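
For the programmatically inclined, here's a toy sketch of that role-to-obligation crosswalk in Python. The role names come from the Act itself; the obligation summaries are my own illustrative shorthand, not the guide's text (and certainly not legal advice):

```python
# A simplified, illustrative crosswalk from AI Act roles to example
# obligations for high-risk systems. Paraphrased shorthand only.
AI_ACT_ROLES = {
    "provider": [
        "conformity assessment before placing on the market",
        "technical documentation and logging",
        "post-market monitoring",
    ],
    "deployer": [
        "use the system according to its instructions",
        "ensure human oversight in operation",
        "check that input data is relevant",
    ],
    "importer": [
        "verify the provider completed conformity assessment",
    ],
    "distributor": [
        "verify CE marking and documentation before resale",
    ],
}
# NB: roles can shift. Substantially modifying a high-risk system can
# turn a deployer into a provider, which is exactly why the guide's
# "placed, used, or modified" crosswalk matters.

def obligations_for(role: str) -> list[str]:
    """Look up the illustrative obligations for a given role."""
    return AI_ACT_ROLES.get(role, [])

print(obligations_for("deployer"))
```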

For high-risk systems, it includes the best summary we’ve seen yet of:

  • Human oversight requirements
  • Documentation and traceability
  • Post-market obligations and red lines
  • Supply chain duties (yes, importers and resellers too)

👉 Get the guide: Bird & Bird European Union Artificial Intelligence Act: A Guide (April 2025)

💡 Why It Matters

If you’re building or buying AI systems in the EU, you cannot afford to be vague on roles and obligations. This guide makes the lines of responsibility visible—especially around deployment, modification, and real-world testing. It’s also the rare legal summary that calls out exceptions and corner cases with precision, rather than burying them in footnotes.

What’s Missing

  • Doesn’t go into detail on how to implement compliance (e.g. controls, toolkits)
  • Limited coverage of how the AI Act interacts with evolving ISO standards
  • No practical templates or flowcharts—this is still a lawyer’s tool, not an ops manual

Best For

  • In-house counsel or external lawyers advising on AI
  • Policy teams preparing for EU AI Act enforcement
  • Product managers responsible for AI risk classification
  • Procurement leads buying high-risk or general-purpose AI systems
  • Anyone updating their AI governance playbook in Q3/Q4 2025

Source Details

Bird & Bird, European Union Artificial Intelligence Act: A Guide

Published (Updated): 7 April 2025

Length: ~90 pages

Authors: Bird & Bird Technology & Regulatory Teams


🌙 After Hours

Here are three things that grabbed my attention this week:

🔗 IMPERIUM: A World-Building Project by Åhlfeldt

More than just a map, DARE (the Digital Atlas of the Roman Empire) is a digital GIS of the ancient Roman world, built by Johan Åhlfeldt and hosted by Lund University.

It contains over 9,000 ancient places and buildings, using historical satellite imagery, topographic sources, and national heritage databases across Europe.

You can explore Roman roads through Barcelona, overlay Celtic sacred sites in Bavaria, or browse administrative layers once locked in printed atlases.

And yes—it’s open data, Creative Commons licensed, and still updated.

If you’ve ever tried building your own governance taxonomy, this is a masterclass in structure.


🎧 The Zizians: How Harry Potter Fanfic Invented AI Cults

This four-part podcast by Robert Evans dives into the lesser-known rationalist cult movements that used AI language to justify control. It's a wild mix of futurism, charisma, and techno-mysticism, and the perfect cautionary tale for anyone who thinks alignment risk is only a technical issue.

The Zizians were never a real AI cult. That’s what makes them so disturbing.

Spawned on Reddit in 2021 by a writer posing as a whistleblower, the Zizians were pitched as a spiritual-technical movement obsessed with “uplifting consciousness,” interfacing with higher algorithms, and preparing humanity for transcendence.

The internet ran with it. Threads multiplied. Fanfiction turned into belief. Then came “testimonies” from people who claimed to have been inside—and damaged by—it.

The movement blurred the line between fiction, fraud, and faith.

As WIRED put it, it was a “delirious, violent, and impossible” story—but one that felt real enough to create real-world consequences, including dissociation, digital obsession, and identity fragmentation.

What’s chilling is how fast it all happened.

The language of AI alignment, techno-transcendence, collective consciousness, and systems control made the leap from forums to feelings.

Not because it was true—because it sounded right.

It’s a sharp reminder:

Not all AI risk is in the model.

Sometimes the story is the threat.

🪄 Priests Blessing Server Rooms

Yes, really.

Monks and priests in Orthodox churches have reportedly begun blessing data centers—sprinkling holy water on server racks to prevent system crashes.

It sounds surreal, but it’s a stark reminder of how deeply human our relationship with tech still is.

Because when you can’t patch it, you pray it won’t fail.

About the author
Jakub Szarmach

AI Governance Library

Curated Library of AI Governance Resources

Your billing was not updated.