Nym: Global Misinformation & Disinformation Policy Research
2023
Abstract
Alongside product work at Nym, I led a global policy research programme examining how different governments address online misinformation and disinformation (MDM). I mapped four distinct regulatory models that shape platform liability for content and user speech, and I built a comprehensive U.S. state-by-state database using a seven-category framework to compare proposed bills, enacted statutes, and enforcement actions. My methods combined open-source intelligence (OSINT), direct analysis of legislation and legal rulings, and quantitative coding of policy attributes. I designed the research database in SQL with a transparent coding protocol, then published an interactive, open map that allowed both specialists and the general public to explore this patchwork of laws visually. The underlying dataset was released openly with documentation, a contribution that was featured in Forbes. In the analysis, the four regulatory models were illustrated with country examples: the United States exemplifying a “safe harbor” approach (with Section 230 shielding platforms), the European Union and UK using a conditional immunity or “duty of care” model, China enforcing strict liability and content controls, and regimes such as Russia and Belarus criminalising online speech. I also highlighted nuances, such as how a well-intentioned democratic law (Germany’s NetzDG, aimed at illegal content) can inadvertently incentivise over-removal of posts by platforms.

The Challenge
This project examined the evolving tension between governments and platforms, extending beyond static law to explore real-world flashpoints. I documented high-impact events – from nationwide internet shutdowns and platform bans to headline-grabbing regulatory fines – to see how policy on paper translates into practice. A special focus was how platforms were handling the new wave of AI-generated content (deepfakes, generative political ads, and other synthetic media). I analysed platform policies on labelling AI content, technical provenance measures, and the effectiveness of user reporting and appeals systems, in both election contexts and everyday scenarios such as health misinformation and financial scams. By the end, our deliverables had reframed the usual “free speech vs. safety” debate as a practical governance and design challenge: we provided Nym’s product, security, and communications teams with a shared vocabulary and visual tools for navigating the content landscape, turning a complex regulatory patchwork into a set of actionable insights.
Behind the scenes, I built a normalised SQL schema to systematically capture states, policy instruments, enforcement cases, and categories from our seven-category taxonomy. Each entry in the database was linked to primary source URLs, dates, and researcher annotations with confidence levels, creating an auditable trail for each data point. I conducted descriptive analytics on this dataset and ran regression checks to ensure we didn’t over-attribute effects (for example, examining whether political party control was a significant predictor of certain bill features). The data pipeline fed directly into the interactive map, which allowed users to drill down to each state’s records and read the underlying laws or proposals. This technical workflow demonstrated my ability to design databases, conduct quantitative analyses, and present data through intuitive visualisation.
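By way of illustration, the core of that schema might have looked something like the sketch below. This is a minimal reconstruction from the description above, assuming a PostgreSQL dialect; every table and column name here is hypothetical rather than taken from the production database.

    -- Illustrative sketch only: hypothetical names, reconstructed from the
    -- prose above, not the production schema.
    CREATE TABLE states (
        state_id     SERIAL PRIMARY KEY,
        name         TEXT NOT NULL UNIQUE,
        abbreviation CHAR(2) NOT NULL UNIQUE
    );

    CREATE TABLE categories (
        category_id SERIAL PRIMARY KEY,
        label       TEXT NOT NULL UNIQUE   -- one of the seven taxonomy categories
    );

    CREATE TABLE policy_instruments (
        instrument_id SERIAL PRIMARY KEY,
        state_id      INTEGER NOT NULL REFERENCES states(state_id),
        category_id   INTEGER NOT NULL REFERENCES categories(category_id),
        status        TEXT NOT NULL CHECK (status IN ('proposed', 'enacted')),
        title         TEXT NOT NULL,
        source_url    TEXT NOT NULL,       -- primary source, for the audit trail
        recorded_on   DATE NOT NULL,
        annotation    TEXT,                -- researcher notes
        confidence    TEXT CHECK (confidence IN ('low', 'medium', 'high'))
    );

    CREATE TABLE enforcement_cases (
        case_id       SERIAL PRIMARY KEY,
        instrument_id INTEGER NOT NULL REFERENCES policy_instruments(instrument_id),
        summary       TEXT NOT NULL,
        source_url    TEXT NOT NULL,
        decided_on    DATE,
        confidence    TEXT CHECK (confidence IN ('low', 'medium', 'high'))
    );

Descriptive queries over a structure like this can feed both the analytics and the map, for example:

    -- How many instruments of each status does each state have, per category?
    SELECT s.name AS state, c.label AS category, pi.status, COUNT(*) AS n
    FROM policy_instruments AS pi
    JOIN states     AS s ON s.state_id    = pi.state_id
    JOIN categories AS c ON c.category_id = pi.category_id
    GROUP BY s.name, c.label, pi.status
    ORDER BY n DESC;

The regression checks mentioned above would typically be run outside the database, on exports of this data, rather than in SQL itself.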
I also translated our findings into decision-making aids for internal teams. We developed a concise “compliance glossary” explaining key terms and legal concepts for quick reference. I compiled a model-by-model liability risk matrix to help Nym’s leadership understand the exposure under each regulatory regime, and I set up a simple intelligence-sharing cadence (a brief monthly update) that product and security leads could use to stay ahead of emerging policy shifts. Building on the AI-content analysis above, we also coded each major platform’s rules for AI-generated content into a structured comparison (sketched below) and highlighted the implications of those policies for user experience, public trust, and Nym’s own community engagement strategy.
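A coding sheet for that platform comparison could live in the same database. The table below is a hypothetical sketch of what such a sheet might capture; the column names are illustrative assumptions, not the fields we actually used.

    -- Hypothetical coding sheet for the platform AI-content survey.
    CREATE TABLE platform_ai_policies (
        policy_id               SERIAL PRIMARY KEY,
        platform                TEXT NOT NULL,  -- platform under review
        labels_ai_content       BOOLEAN,        -- does it label AI-generated media?
        bans_deepfakes          BOOLEAN,        -- outright ban on deceptive deepfakes?
        provenance_tooling      TEXT,           -- e.g. watermarking or metadata standard in use
        election_specific_rules BOOLEAN,        -- stricter rules for election content?
        source_url              TEXT NOT NULL,  -- policy document reviewed
        reviewed_on             DATE NOT NULL,
        confidence              TEXT CHECK (confidence IN ('low', 'medium', 'high'))
    );

Coding each platform into boolean and categorical fields like these is what makes policies comparable side by side instead of buried in prose.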
Conclusion
Finally, I packaged the complete MDM dataset and analysis for open access, including a detailed codebook and methodology guide, so that external researchers or journalists could replicate or build upon our work. This demonstrated my commitment to open data principles, reproducibility, and transparent communication with stakeholders. The open resource we created has been used to inform policy briefs, internal training sessions, and market-entry assessments for Nym, extending the impact of our research beyond the company.
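As an example of how such a release can be produced, a single export statement can generate the public CSV directly from the schema sketched earlier. This assumes PostgreSQL and a hypothetical output path; it is a sketch of the approach, not the exact release pipeline.

    -- Hypothetical export of the joined dataset to CSV for the public release.
    -- (COPY ... TO writes server-side; psql's \copy variant writes client-side.)
    COPY (
        SELECT s.name AS state, c.label AS category, pi.status,
               pi.title, pi.source_url, pi.recorded_on, pi.confidence
        FROM policy_instruments AS pi
        JOIN states     AS s ON s.state_id    = pi.state_id
        JOIN categories AS c ON c.category_id = pi.category_id
        ORDER BY s.name, c.label
    ) TO '/tmp/mdm_us_state_dataset.csv' WITH (FORMAT csv, HEADER true);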