Immersing Ourselves in the Future: Lessons from the DISINFTRUST Pilot
By Susan Abbott
We are living in an era where democracy, truth, and trust are under increasing pressure, and the tools we use to protect them must evolve accordingly. The DISINFTRUST pilot project, funded by the European Media and Information Fund (EMIF) and implemented by Newtral, the Blanquerna School of Communication, and Cardiff University, represents a compelling and timely response to this challenge. Over 18 months, the project delivered a rigorous, interdisciplinary, and human-centered proof of concept focused not just on detecting disinformation, but on understanding how and why people come to trust it.
As an evaluator, researcher, and practitioner specializing in information integrity and resilience, I found this project both inspiring and instructive. It advanced an approach that was tightly scoped yet highly ambitious—combining social science research, technical innovation, and media practice in a single pipeline. Perhaps most importantly, it offered a conceptual shift: away from policing individual falsehoods and toward identifying narrative structures and trust dynamics that underpin belief formation in real-world contexts.
This post offers an overview of the DISINFTRUST pilot, highlights from the independent evaluation I conducted, and reflections on what this project tells us about the future of digital information ecosystems.
What Made DISINFTRUST Different?
Unlike many initiatives that target bad actors or content moderation, DISINFTRUST began with a different assumption: that belief is shaped not just by exposure to false content, but by emotions, context, and underlying narratives. The goal was to understand and model the factors that make disinformation resonate—and to develop tools that empower users to recognize and respond to those factors themselves.
Rather than chasing viral falsehoods, the DISINFTRUST project focused on narratives—the broader storylines that disinformation campaigns exploit and reinforce. Through a combination of machine learning, behavioral science, and media expertise, the team mapped narratives across nine countries using over 200,000 fact-checks. They then designed a multilingual, explainable AI model to assess manipulation features such as clickbait, subjectivity, and toxicity, and tested a browser extension prototype to deliver real-time trust cues to users.
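The team’s actual pipeline isn’t published here, but the core move is easy to sketch: embed fact-check claims with a multilingual model so that semantically similar claims land near each other regardless of language, then cluster the embeddings into candidate narratives. The toy example below uses off-the-shelf libraries (sentence-transformers and scikit-learn); the model name, cluster count, and sample claims are my own illustrative assumptions, not the project’s configuration.

```python
# Minimal sketch: grouping multilingual fact-checks into candidate "narratives"
# by embedding their claims and clustering the embeddings. The model name,
# cluster count, and toy data are illustrative assumptions, not the
# DISINFTRUST team's actual pipeline.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

fact_checks = [
    "Claim that ballots were destroyed before counting",              # election narrative
    "Afirmación de que se destruyeron papeletas antes del recuento",  # same claim, Spanish
    "Claim that a vaccine ingredient alters human DNA",               # health narrative
    "Behauptung, ein Impfstoff verändere die menschliche DNA",        # same claim, German
]

# A multilingual model places semantically similar claims near each other
# regardless of language -- the property that lets cross-country narratives emerge.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(fact_checks, normalize_embeddings=True)

# Two clusters is enough for this toy example; in practice the count would be tuned.
labels = KMeans(n_clusters=2, random_state=0).fit_predict(embeddings)

for text, label in zip(fact_checks, labels):
    print(label, text[:60])
```

At the scale of 200,000+ fact-checks, a density-based method that discovers the number of clusters on its own (e.g., HDBSCAN) would likely be a better fit than fixing the count in advance.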
As one stakeholder put it:
“The clustering of fact-checks by narrative is powerful. It lets us zoom out from isolated claims to broader patterns—something we need more of if we want to understand influence at scale.” — Data scientist, interviewee
This kind of pattern recognition is essential for tackling disinformation in complex, fast-moving environments. As the EMIF project page notes, the pilot demonstrated “how interdisciplinary approaches can provide a more nuanced understanding of belief formation and user behavior—contributing to better-designed and more transparent tools” (EMIF, 2025). Funded with a grant of €221,487.47 from EMIF, the project was conducted by Newtral, Blanquerna, and Cardiff University to pioneer narrative-based trust analytics across nine countries.
A Compact Yet Comprehensive Pilot
Over its 18-month pilot, DISINFTRUST produced the following core outputs:
- Clustering of over 200,000 fact-check entries from nine countries to identify disinformation narratives
- Large-scale cross-national surveys (~2,500 respondents per country) examining belief formation and exposure
- Online experiments testing causal mechanisms behind belief, including exposure to AI-generated disinformation
- Development of a multilingual AI model that scores content on manipulation features
- A working browser extension prototype, tested in media literacy workshops, offering real-time trust signals to users
As Newtral highlighted in their project summary, the browser extension’s goal was not to flag “fake news” per se, but to provide contextual indicators—such as tone, complexity, and emotional charge—helping users make more informed judgments about the credibility of content in their feeds.
“It’s not about debunking individual posts anymore. The future is about giving people cues and context so they can make better judgments themselves.” — Evaluation reflection
This approach recognizes users as agents, not passive recipients. It attempts to put the power of discernment into the hands of users, not only improving individual decisions but also contributing to healthier information ecosystems overall.
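To make the idea of trust cues concrete, here is a minimal sketch of scoring a piece of text on cues like the ones described above: subjectivity, emotional charge, and complexity. The heuristics, thresholds, and English-only scope are illustrative assumptions on my part; the project’s multilingual, explainable model is considerably more sophisticated.

```python
# Minimal sketch: turning a piece of text into the kind of "trust cues" the
# prototype extension surfaces. These heuristics and thresholds are
# illustrative assumptions for English text only, not the project's model.
import re
from dataclasses import dataclass

from textblob import TextBlob  # simple subjectivity scoring, English-centric


@dataclass
class TrustCues:
    subjectivity: float      # 0 = factual register, 1 = opinionated
    emotional_charge: float  # crude proxy from exclamations and all-caps words
    complexity: float        # average sentence length in words


def score(text: str) -> TrustCues:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = text.split()
    exclamations = text.count("!")
    shouted = sum(1 for w in words if len(w) > 2 and w.isupper())
    return TrustCues(
        subjectivity=TextBlob(text).sentiment.subjectivity,
        emotional_charge=min(1.0, (exclamations + shouted) / max(len(sentences), 1) / 3),
        complexity=len(words) / max(len(sentences), 1),
    )


print(score("SHOCKING! You won't BELIEVE what they are hiding from you!!!"))
```

The design point is the output shape: a handful of interpretable numbers a browser extension can render alongside a post, rather than a single opaque “fake/not fake” verdict.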
Evaluation Insights
The DISINFTRUST project was evaluated through a qualitative, utilization-focused lens, designed to inform both immediate learning and longer-term scale-up potential. The evaluation was conducted independently and drew on interviews with core team members, external stakeholders, and an in-depth document review.
Key insights from the evaluation include:
- A clear value-for-money proposition: The project achieved an impressive range of outputs with relatively modest funding. As one interviewee noted:
“With limited funding, the team demonstrated how to merge cutting-edge AI techniques with grounded behavioral science to produce something genuinely useful and globally relevant.”
- Strong interdisciplinary collaboration: The team navigated complex research, technical, and practical challenges with a sense of shared purpose. Academic, journalistic, and UX contributors all played equal roles in shaping the design.
- User-centered design: From early media literacy testing to feedback loops across stakeholder groups, the project consistently prioritized usability and transparency over technical perfection.
- Challenges to scaling: While the pilot’s achievements were notable, scaling this kind of real-time, multilingual, trust-oriented tool remains resource-intensive. The evaluation noted specific challenges around scoring content consistently across languages and defining subjective features (e.g., toxicity) reliably; a minimal version of the standard reliability check is sketched after this list.
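One standard way to pin down a subjective feature like toxicity is to measure agreement between independent annotators before any model is trained on the label. A minimal sketch of that check, with invented labels:

```python
# Minimal sketch: checking whether a subjective feature like "toxicity" is
# defined reliably enough to train on, by measuring agreement between two
# independent annotators. The labels below are invented for illustration.
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = toxic, 0 = not toxic
annotator_b = [1, 0, 0, 1, 0, 1, 1, 0]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values below ~0.6 suggest the codebook needs tightening
```

For more than two annotators, or for datasets with missing labels, Krippendorff’s alpha is the usual generalization.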
Still, as a proof of concept, the project stands out. The evaluation concluded that DISINFTRUST met its core objectives and offered a strong foundation for iterative improvement and cross-sector collaboration.
Why Narratives Matter in the Disinformation Fight
A foundational insight of DISINFTRUST is that falsehoods are rarely persuasive on their own. They draw power from the narratives they’re embedded within—stories that are emotionally resonant, culturally salient, and often rooted in legitimate grievances or fears.
By analyzing recurring disinformation narratives, the project moved beyond chasing viral posts and began to map the discursive landscape in which disinformation circulates. This enabled more proactive and preventative interventions—allowing educators, journalists, and civil society actors to understand the logic behind harmful content and to tailor counter-narratives accordingly.
This shift in the unit of analysis—from claim to narrative—represents a significant evolution in the field. It better aligns with how real people consume information: not as discrete facts, but as part of broader belief systems and identity formation processes.
Implications for the Field
The broader field of disinformation, particularly in the donor and civil society space, can take several lessons from the DISINFTRUST experience:
First, the unit of analysis needs to shift: From individual falsehoods to narrative-level patterns. This allows for earlier detection, deeper understanding, and more adaptive interventions.
Second, trust is a dynamic, context-specific variable: Tools and frameworks must recognize how trust operates differently across cultures, languages, and media environments.
Third, innovation doesn’t require massive funding: Agile, well-aligned teams can produce major breakthroughs when they are given space to explore, learn, and adapt.
Finally, partnerships across sectors matter: This pilot succeeded in part because it bridged gaps—between AI researchers and fact-checkers, between academics and media educators, and between design and impact.
Final Thoughts: Toward a More Trustworthy Information Future
The takeaway from DISINFTRUST is not simply that we need new tools, but that we need new approaches. This pilot demonstrated that it is possible to design interventions that are technically sophisticated, behaviorally grounded, and socially responsive while remaining cost-effective, and it charted a credible, if resource-intensive, path toward scale.
It showed what’s possible when narrative intelligence, AI innovation, and public interest values intersect.
Looking ahead, DISINFTRUST offers a model for how journalism, civil society, and technologists can co-create tools that not only detect harm but empower users to build trust, navigate complexity, and strengthen our shared information commons.
If we are to meet the challenges of today’s information environment, we must move beyond whack-a-mole responses to falsehoods. The real work lies in shaping the architectures of trust—one narrative, one user, one innovation at a time.
Susan Abbott, PhD, conducted the independent evaluation of the DISINFTRUST project for MJRC.
Photo by Hartono Creative Studio on Unsplash