Last night, I had the chance to speak at Reimagine Our European Media Future, a meet-up during SXSW London 2025 hosted by EIT Culture & Creativity, the City of Amsterdam, and New Dutch Wave in collaboration with VPRO Medialab, CoECI, CIIIC, and ROM Utrecht.
The talk I shared, included in full below, is both a positioning and a call to action. It comes out of ongoing conversations at VPRO Medialab, the NPO, and the Public AI Network.
My hope is that this piece doesn’t just describe the terrain but opens space to imagine differently. To resist the smooth logic of optimization and instead ask: what kind of intelligence do we want to live with?
Thank you to everyone who was there, and especially to my fellow speakers for their thoughtful and compelling contributions. Below the talk text, I’ve added a short reflection on one of the questions raised during the Q&A that stayed with me.
As always, I improvised a little during the talk. I’ve done my best to include all the tangents and side thoughts I threw in on the spot, but I can’t promise a perfect transcript. Think of this as the director’s cut. Slightly more coherent, but still with all the feelings intact.
The Story is the System: Public AI x Public Media
Hi everyone, my name is Abel Enklaar. I’m the editor-in-chief of VPRO Medialab, the research space of VPRO Broadcasting in the Netherlands. At Medialab, we explore how emerging technologies open up new ways to tell stories and connect with audiences. Since 2015, we've been experimenting with everything from virtual reality to vertical video to generative AI.
This presentation is based on an essay I’ve been writing over the past few weeks. Next to my work at Medialab, I’m also a member of the Public AI Network, a kind of think tank made up of policy experts, researchers, and technologists from around the world who are concerned about the direction AI is taking.
The foundational systems on which we built public media are shifting beneath our feet. I felt the urgent need to formulate a sort of positioning. This is a starting point for deeper reflection, shared strategy, and bold experimentation within and beyond the public sector.
We’re used to talking about AI as a product. A new tool to adopt or resist. But this isn’t just about tools. It’s about infrastructure, the kind that quietly reshapes how we speak, remember, decide, create, and relate to each other. That’s what this is about. And that’s what I want to explore with you today.
Something big is shifting in how we relate to media. We often talk about AI as if it’s just a new tool. Like a hammer. A calculator. Something we use. But that metaphor is outdated. It doesn’t match what’s actually happening.
AI systems are starting to shape the media we consume, the stories we hear, and the way we learn and think.
That’s what I mean when I say: AI is becoming a mesh. Not one app. Not one interface. But a layer that surrounds us. Recommending, rephrasing, translating constantly. And it’s invisible most of the time.

Earlier this year, at the Mobile World Congress (a big conference for the telecommunications sector), T-Mobile announced it’s building a phone that’s AI-first. The most important interactions on your phone happen through a conversational interface. You just tell the phone what you want, and it does it for you. From the basic tasks we already have, like sending a quick message or finding a location, to more complex generative tasks: summarising notes, doing live translation, and even generating whole podcasts on the fly, pulling live information from the web, writing a script, and generating synthetic voices.
It sounds amazing. And in many ways, from a consumer perspective, it is. More immediate media that better takes into account my access needs, my personal likes and dislikes.
Imagine a device that knows you so intimately it makes sure the video you’re watching turns into a podcast the moment you step into the car. It creates rich audio description for the blind. A student struggling to concentrate on a long lecture? The AI automatically turns it into an interactive debate, maybe with some gamified elements that really help the material stick.
In a way, it’s a utopian vision. Delivering the best, tailor-made content for every individual.
But it also changes something fundamental.
What happens to the connection between maker and audience?
We’ve already seen how social platforms can disrupt that bond. At first, it felt like a revelation. Social platforms democratizing media. Timelines as a reflection of your interests and the people you followed.
Until the endless scroll took over. Discovery became the main driver. You no longer connected with your community or the creators you followed. Those likes and follows became data points for the algorithm to decide what to show you next.
Now, it’s moving one step further.
The AI becomes the relationship. It curates what you hear. Not because it knows what’s true, but because it knows what you’ll listen to.
And if we don’t intervene, we’ll lose something essential. Not just control. But connection.
What happens to public institutions, to broadcasters, libraries, schools, museums in a world where people no longer come to them, but instead talk directly to an AI trained on their work?
The archive becomes training data.
The institution becomes invisible.
AI listens to us, so the public doesn’t have to.
It paraphrases, condenses, remixes, revoices.
And that remixed version is what people come to trust.
It feels like magic. But it’s not neutral.
Because these systems are not built for collective flourishing.
They’re built for scale, surveillance, profit, and control.
And if we let these systems fully mediate our cultural lives,
we risk losing the possibility of a shared public reality.
Where once we had shared sources of truth, like the evening news or a trusted teacher, now we have AI-generated answers. Personalised, hyper-tailored, optimised for you.
But culture is not something to be optimised.
Culture is messy. Collective. Argumentative.
It’s the stories we fight over, not just the ones we prefer.
And here’s the truth:
if everyone lives in their own perfectly optimised media bubble,
we no longer live in the same world.
This is where the case for Public AI begins.
Because if AI becomes the system through which we access culture, information, and one another, then it matters who owns that system. It matters what values it reflects. And it matters whether we, the public, have any say in how it works.
Right now, most of the infrastructure shaping AI is private, centralised, and optimised for profit. The logic of these systems is extractive. They take our data, our stories, our cultural memory and turn them into products.
We need to stop seeing that as inevitable. Because there is another way.
A Public AI approach asks: what would it mean to design these systems differently? What would it look like to build models in public institutions, tuned on local data, steered by public values?
We don’t have to accept the systems we’re given. We can build alternatives.
And not just because they’re more fair. But because they make better sense for society. Systems that are accountable. Transparent. Plural. Governed with care.
That’s the project of Public AI. It’s not just about regulation. It’s about imagination. Infrastructure. And solidarity.
And we need everyone in the room.
Because this goes far beyond culture.
What if AI becomes your gateway to healthcare, to education, to government services? To the stories you hear and the decisions you make?
I don’t want to live in a world where your ability to express yourself, to stay healthy, or to access culture depends on whether you can pay a subscription fee to a Silicon Valley broligarch.
We need to treat AI as civic infrastructure. And we need to build it like we mean it.
So, I want to leave you with a question:
If AI is going to shape public life, then what’s your role in shaping AI?
Because we all have one. You can connect across institutions. Influence policy. Fund alternatives. Rethink your tools. Challenge your assumptions.
But above all: don’t settle for the role of user or consumer. You are a citizen. You are a co-creator. You have the right to shape what AI becomes.
If we want a better system, we need to start by telling different stories. Stories that don’t optimise for efficiency. Stories that imagine new ways of being together. Stories that tell us a better world is possible.
Because the story is the system.
Audience Q&A Reflection: We Should Fight for Disagreement
At the end of my talk, someone asked whether it’s realistic to expect we could ever agree on what public AI should look like. I don’t believe agreement is the goal.
What draws me to a public approach to AI is that it makes space for disagreement. It does not require everyone to share the same worldview. Instead, it gives local communities the ability to shape systems that reflect their own needs, histories, and values.
I shared an example during the talk about the way coffee shops around the world have started to look and feel the same. I called it the IKEA-fication of coffee shops. That kind of sameness might seem convenient, but it erases difference and reduces cultural texture.
Public AI could offer something else entirely. It could be the infrastructure that helps difference persist. Not a single intelligence that speaks for everyone, but a framework that supports many ways of thinking, living, and creating.
All images in this post were generated by me using DALL·E, inspired by the visionary work of radical Italian architecture group Superstudio. Their project The Continuous Monument, developed between 1969 and 1971, remains a huge source of inspiration. I’ll likely devote a future post to why their ideas continue to resonate so strongly with me.