US pressure has complicated efforts to regulate AI, but Europe must act now

AI threatens the ability of news publishers to generate revenue from their content and, by extension, undermines the media pluralism essential to democracy

US vice-president JD Vance at the AI Action summit in Paris last February. Photograph: Ludovic Marin/AFP via Getty Images

Scarcely six months after the EU's AI Act, the world's first comprehensive AI legislation, entered into force, heads of state, CEOs from companies such as Google, Microsoft and OpenAI, senior government officials and computer scientists from around the world gathered in Paris last February to attend the AI Action Summit.

“I’m not here this morning to talk about AI safety,” US vice-president JD Vance said on his first overseas trip. “I’m here to talk about AI opportunity.” Taking aim at the Act, he set out the new administration’s position: “We believe that excessive regulation of the AI sector could kill a transformative industry just as it’s taking off.”

At the centre of the Act is a voluntary code of practice for general-purpose AI models, which is being drawn up by independent experts and based on submissions from industry stakeholders, academics, civil society, representatives of member states, as well as European and international observers.

For AI models that may carry systemic risks, it sets out how providers should assess and mitigate these risks. It is also intended to spell out how developers should respect intellectual property rights when training AI systems. But finalising the code has become a slow-motion tug of war.

Behind closed doors, some in Brussels acknowledge that US pressure, and growing nervousness inside certain EU capitals, has complicated efforts. Poland's recent call for a "stop-the-clock" pause on parts of the AI Act's implementation reflects this unease. Warsaw's intervention is widely seen as an early warning sign: some member states fear the EU may be moving too far, too fast, or in ways that could trigger trade disputes on top of those it can already ill afford.

Meanwhile, Europe’s creative sectors feel increasingly abandoned. Artists, musicians, writers, journalists and other creators see their work ingested to train generative AI systems, systems that can now produce convincing imitations of human-created music, images and text in seconds. This is done without their consent, credit or compensation.

The European Commission insists it is bound by existing copyright law, arguing that the AI Act cannot go beyond the EU copyright directive, which provides an exception allowing copyrighted works to be used for scientific research purposes without permission from copyright holders. Because rights holders have no basis to object, they have no basis to request payment for the use of their material. Although that directive was only passed in 2019, copyright holders say nobody had AI in mind at the time. Many creators placed their hopes on the AI Act delivering accountability for how models are trained on their works. As that possibility recedes, we now hear promises that a broader review of copyright rules may come next year.

In the absence of legislative intervention, it will fall to the courts to determine whether AI companies should be allowed to train models on copyrighted works without permission, and if not, what rules on transparency, licensing or opt-outs will ensure creators are fairly treated. In the US, the analogous "fair use" doctrine is the subject of increasing litigation, with the New York Times's case against OpenAI the most high-profile example. In India, news outlets and book publishers who say the firm uses their content without permission to help train ChatGPT have brought a high court challenge that could reshape how the sector operates there.


When it comes to the media sector, the stakes are not just economic but democratic. AI-generated content stripped of attribution threatens the ability of news publishers to generate any revenue from their content and, by extension, the media pluralism on which democratic society depends. While some larger, stronger news publishers, including the New York Times, are signing licensing agreements with AI companies, others that lack their heft are being pushed out.

The Reuters Institute’s annual Digital News Report, which was released last week, found that a growing number of people are using AI chatbots to read headlines and get news updates. While only 7 per cent overall say they use AI chatbots to find news, that number rises to 15 per cent of under-25s.

The commission deserves credit for having had the courage to propose regulation to mitigate the risks posed by AI, risks highlighted by respected computer scientists in both academia and industry. But the legislation fails to determine definitively how the balance should be struck between the development of European AI, trained on European data so as to reflect European values and culture, and the protection of Europe's creative and media industries.

The code of practice was meant to offer at least partial answers. Today, it risks becoming a fragile compromise text pulled between Silicon Valley's warnings of regulatory overreach and rights holders' warnings of economic erosion already under way.

The European Parliament sits uneasily in the middle. After a hard-fought compromise, it is naturally inclined to defend the legislative package it passed so recently. But we are also being asked by citizens, creators and member states alike to fill the legal grey zones that are already opening. And to do so in the face of the prevailing zeitgeist, which is one of deregulation and simplification.

The window to get this right is narrowing. Copyright is not a mere technical issue. It strikes at the heart of Europe’s cultural sovereignty and democratic resilience in the AI era.

Michael McNamara MEP is co-chair of the European Parliament’s working group on the Implementation and Enforcement of the AI Act