I am not great at writing regular blog posts going deep into specific issues. But as someone who follows a wide range of content, especially on AI, digital regulation, and technology, I often come across articles, reports, and research that are just too interesting not to share. That’s why I’m starting a new post series called “Interesting finds”. Here, I’ll (hopefully) regularly gather and highlight the most thought-provoking content I’ve discovered, adding a bit of context so you can go deeper if something catches your eye. I hope you’ll find something useful in these curated selections.
AI and Governance
The European Union continues to lead in shaping AI governance. The EUTA-GPAI report lays out key principles and actions for Europe to maintain competitiveness while ensuring responsible AI development. This document underscores the tension between fostering innovation and maintaining strict oversight.
The IAPP’s report on the AI governance profession offers a snapshot of the emerging roles, responsibilities, and career paths within organizations striving to implement ethical and compliant AI systems. This is also echoed in academic analyses such as “Regulating Generative AI”, which explores how jurisdictions worldwide are grappling with new frontiers in automation and creativity.
Meanwhile, EIOPA’s consultation on AI governance looks at the growing complexity financial supervisors face as they seek to balance risk management with technological opportunity. Sidley’s summary of EIOPA’s opinion shows that sector-specific regulators are increasingly involved in interpreting how general AI rules apply within niche industries.
The academic papers “The Law and Artificial Intelligence: Regulating Autonomous Systems” and “The Regulation of Artificial Intelligence: A Comparative Analysis” analyse regulatory trends by region and highlight where consensus and disagreement lie.
Elsewhere, this CircleID article questions whether traditional multi-stakeholder models can survive the rise of state-driven digital policy. As these debates unfold, they will set the direction for innovation, economic growth, and democracy itself.
AI and Copyright
Legal uncertainty around AI and copyright is growing more visible as generative models become central to creative and commercial work. The European Parliament’s study on generative AI from a copyright perspective highlights open questions about originality, ownership, and liability, while the first AI copyright case referred to the CJEU signals that European courts will soon take up these issues directly.
This is far from just a European story. The U.S. Copyright Office’s recent report examines whether and how works created with AI can be protected under U.S. copyright law, emphasizing the importance of human authorship. Meanwhile, the UK government’s consultation invites public input on how national law should adapt to AI-generated content, reflecting similar debates globally.
Commentary from legal experts explores the complexities of authorship when AI systems are heavily involved. For example, this Technollama post looks at current disputes over who—if anyone—should be credited as the creator when machines do much of the work. Practical questions also arise in software development: as ZDNet discusses, when tools like ChatGPT generate code, the lines of ownership and responsibility are often unclear.
Earlier digital challenges can also offer fresh perspective: a Dutch case about RSS feeds is being re-examined for insights into the boundaries of copyright in the age of automation.
The AI and copyright question remains unsettled, and answers will likely come from a patchwork of court decisions, legislative reforms, and practical experimentation across jurisdictions.
AI and Safety
International alignment remains a work in progress. The UK International AI Safety Report 2025 benchmarks emerging approaches to AI safety across national borders and stresses the need for harmonized standards. Complementing this, the NSA’s joint guidance addresses data security best practices in an age where model integrity is as important as privacy.
Incident reporting is another cornerstone of the safe use of AI: the OECD’s framework aims to make it easier for stakeholders to share information about AI failures or near misses, an essential step toward learning from mistakes and preventing harm.
Final thoughts
The regulatory landscape for artificial intelligence is becoming both richer and more complex. Europe’s comprehensive approach, the rise of new governance professions, and the push for harmonized global standards all signal that AI oversight is rapidly maturing. Legal uncertainty remains, especially around generative AI and intellectual property, but the work toward clearer rules and responsible frameworks is under way. As international organizations, academics, and industry all contribute to shaping these systems, the coming years will be defined by how well we balance innovation with risk, and how effectively we coordinate across borders.