MJRC Expert Chats Series
Simone Benazzo in conversation with Colin Porlezza of Università della Svizzera italiana
Last October the Institute of European Democrats published a policy report I authored on the untapped potential of Artificial Intelligence (AI) in the EU’s fight against autocratization: Building democratic AI for the EU: Potential, tools, challenges. Since most of my policy recommendations focus on journalism, I took the opportunity to interview Colin Porlezza, Senior Assistant Professor of Digital Journalism at the Università della Svizzera italiana, to gain some fresh, thought-provoking insights into the relationship between AI and journalism.
We discussed what the EU has (not) done thus far in its legislative effort to regulate AI, the potential positive applications of AI in journalism, and the role the EU can play in transforming AI from an adversary to an ally of independent journalism in the battle to safeguard media freedom and pluralism throughout the bloc. Below, we present the key points from our conversation.
Colin Porlezza: Very complex question, mostly because, when it comes to AI, there are many different possible applications, especially in the field of journalism. We can talk about the use of artificial intelligence in both the production and distribution of news, for instance. And, in my opinion, in both cases there are risks and opportunities.
A fundamental problem when discussing a governance approach, or more strictly the regulation of this technology’s use in the journalistic and media sector, is to always keep in mind that we are not talking about an ordinary sector. On one hand, the use of AI must be regulated; on the other, the media sector also enjoys certain protections relating to freedom of the press and freedom of expression. It is therefore a very delicate area where conflicts between different rights and duties can arise.
For instance, if we refer to the Charter of Fundamental Rights of the European Union, it is not only specified that the media, as a sector, benefits from freedom of the press and freedom of expression, but the way in which information is disseminated is also protected and defended in specific ways.
So, for example, if AI software were used to generate texts that were then distributed as journalistic content, these would automatically be protected under the rights of freedom of the press and freedom of expression. Very often, we find ourselves in a grey area where we must be careful not to infringe on the rights of the media, especially when we discuss statutory regulation that aims to restrict the use of these technologies.
And, in my opinion, it is no coincidence that if we look at what the European Union has proposed so far regarding the use of AI, especially in the specific field of media and journalism, we find very little.
Benazzo: I see that as a crucial point: how much space are media and journalism given within the EU debate on regulating AI?
Porlezza: As part of a study also financed by the Federal Department of Communication in Switzerland, I examined every single official document that led to the development of the so-called AI Act by the EU institutions: in these documents media and journalism are not even mentioned once. An intriguing finding.
On the one hand, for those of us who deal with journalism and media, it is always striking to see that media and journalism do not represent a sector of primary importance for the EU. On the other hand, this could also be seen as alarming, because we believe that the impact of this technology on media and journalism, especially on the distribution of news, is a fundamental matter.
Therefore, even in this case, it is not very clear what the EU approach to the topic is.
Of course, the EU seeks to develop an AI Act that is applicable across all sectors, also because it regulates not so much the use of AI, but access to the market. Consequently, it could very well also be applied to the media, not so much because it regulates the use of these technologies, but because this technology must fulfill certain “criteria” to be admitted to the market.
However, for our team, discovering that the media are never mentioned in this body of legislation was a surprise, also because social networks and platforms, which are intermediaries, are mentioned: not often, but they are mentioned nonetheless.
So the “media,” generally speaking, occasionally appears on the European Union’s radar, but almost always as social media, and never in the form of traditional media and even less as journalism. This as far as the European Union is concerned.
It should be pointed out, though, that another supranational institution, the Council of Europe (CoE), has embraced a somewhat different approach. At the moment, its Committee on Artificial Intelligence (CAI) is finalizing a first draft of the Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, a legal framework that aims to define the general principles and rules governing the development, use and implementation of AI systems. In addition, one of the CoE’s working groups, the Resilience of the Media group, is developing guidelines for the use of digital technologies in media and journalism, with particular attention paid to AI. Still, to sum up, even if the CoE has taken more concrete steps in this regard, regulators still lag far behind in their efforts to regulate the use of AI on and for journalism.
Benazzo: In a phase where EU legislators seem to have little awareness of the potential negative impact AI can have on journalism, is there room to imagine positive uses of these technologies in this field?
Porlezza: Keeping in mind the point I raised above, namely that there are still many gaps and gray areas regarding the use of AI, we can try to outline some potentially positive applications.
Several studies underline that the personalization of journalistic or media content does not necessarily represent only a problem or a risk. Put differently, studies on the mechanisms of “news recommenders” have shown that personalization and recommendation do not always end up reducing the breadth or diversity of content users are exposed to. Used in a certain way, news recommenders can also broaden the spectrum of content a user consumes.
To give an intuitive example, if a person is only interested in sports, recommendations could be offered only in the sports context. However, by programming the algorithm the way Spotify does, also surfacing different topics such as national political or cultural news, the recommendation can contribute to partially diversifying the user’s media consumption. Thus, explaining to users exactly how the algorithm works, while also giving them the possibility to opt out (or, even better, to opt in), could lead to a more conscious use of AI technologies.
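The diversification mechanism Porlezza describes can be sketched in a few lines of code. This is purely an illustration, not any outlet’s or Spotify’s actual system: it assumes a hypothetical list of articles tagged by topic and simply reserves a fixed share of each feed for topics outside the user’s stated interest.

```python
import random

def recommend(articles, user_interest, n=5, diversity_share=0.4):
    """Return n articles: mostly matching user_interest, but with a
    fixed share reserved for other topics to broaden exposure.
    (Hypothetical sketch; the parameter names are assumptions.)"""
    matching = [a for a in articles if a["topic"] == user_interest]
    other = [a for a in articles if a["topic"] != user_interest]
    n_other = int(n * diversity_share)   # slots reserved for other topics
    n_match = n - n_other                # slots for the stated interest
    picks = matching[:n_match] + random.sample(other, min(n_other, len(other)))
    return picks

articles = [
    {"title": "Derby recap", "topic": "sports"},
    {"title": "Cup final preview", "topic": "sports"},
    {"title": "Transfer rumours", "topic": "sports"},
    {"title": "Budget vote", "topic": "politics"},
    {"title": "Museum reopens", "topic": "culture"},
]

# A user interested only in sports still receives politics and culture items.
feed = recommend(articles, "sports")
topics_in_feed = {a["topic"] for a in feed}
```

Raising `diversity_share` toward 1 broadens the feed further, while setting it to 0 reproduces the pure filter-bubble case the studies warn about.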
Benazzo: In my briefing I advance some concrete recommendations to turn AI into an ally of media freedom and pluralism. The first one concerns the fight against disinformation. What possibilities exist in this regard, in your view?
Porlezza: There can be multiple applications in this sense, and I know of many initiatives that deal, for example, with fact-checking, automated or otherwise, in which many editorial offices actually try to combat disinformation through AI technologies.
So yes, the potential might be there, but in my opinion, there is a problem of perception of the technology. Readers often see this technology not as a tool to fight disinformation but as the main cause of disinformation, precisely because AI, through bots and social media, has been used for this purpose. So, the perception is that this technology is generally more the cause than the solution to the problem of disinformation.
We should then also think about how to change users’ perception of this technology, that is, invest in “AI literacy,” an evergreen theme that always comes up when we talk about users. In my opinion, however, it remains important and relevant precisely because, if we actually want to exploit the potential of this technology to fight disinformation, its perception among users must improve significantly.
Benazzo: Another part of my policy briefing revolves around local journalism, which has been in crisis for years across the EU. Could AI help fight the emergence of news deserts, areas and regions where no local newspapers addressing the needs of local communities exist any longer?
Porlezza: Admittedly, I am always rather skeptical when I hear proposals about the use of AI at a regional or local level because the use and implementation of this technology is often faced with many problems related to resources, not only economic, but often also human. Local media and regional organizations rarely have staff trained in the development and implementation of these technologies.
However, I see an important role for public service, which very often not only should, but also has the obligation to cover what happens at local and regional levels in a given country. Here we also go back to the discussion we had before about the targeting of news through news recommenders. If a public service has the possibility to also offer personalized content in geographically delimited terms, therefore news on a regional or local basis, this could be immensely useful for users, precisely because they could obtain and receive information on what is happening close to where they live.
There are already cases where such possibilities are being exploited. In Germany, Bayerischer Rundfunk has developed an application for regionalized radio news that lets the user set the area in which he or she wishes to receive news. For example, a person who lives 150 kilometers from Munich can choose to receive all the news related to that specific area. In this way, the radio news one receives is more personalized.
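The geographic filtering behind an app like the one described can be sketched as follows. This is a hypothetical illustration, not Bayerischer Rundfunk’s actual code: it assumes news items tagged with coordinates and keeps only those within a user-chosen radius, using the standard haversine great-circle distance.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # Earth radius ~6371 km

def local_news(items, user_lat, user_lon, radius_km=50):
    """Keep only items whose coordinates fall within radius_km of the
    user's chosen point. (Field names are assumptions.)"""
    return [i for i in items
            if haversine_km(user_lat, user_lon, i["lat"], i["lon"]) <= radius_km]

items = [
    {"title": "Munich council meets", "lat": 48.14, "lon": 11.58},
    {"title": "Nuremberg fair opens", "lat": 49.45, "lon": 11.08},
]

# A user near Munich with a 50 km radius sees only the Munich item;
# Nuremberg lies roughly 150 km away and is filtered out.
feed = local_news(items, 48.14, 11.58)
```

In a real deployment the radius and centre point would be user settings, exactly the “set the area” control described above.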
This would not only be an additional service but could also be a largely positive application of content personalization and recommendation, covering precisely those territories where until recently there was still a local newspaper that young people no longer read, or a once-prestigious newspaper that had to close for financial reasons.
In my opinion, in this case public services have a responsibility that should not be underestimated. And, normally, these actors always have a charter of values, linked to a license, which they are required to abide by. So, there would also be possibilities to direct the use of these technologies, making them more transparent and perhaps also forcing providers to update their operations after a certain number of years or months.
Briefly, when we think about local journalism and the fight against news deserts, I believe that public services play a fundamental role.
Benazzo: On this topic, do you think that the EU could provide local newsrooms with the resources they need to tap the potential of AI, thus contributing to the “democratization” of access to AI in the realm of journalism?
Porlezza: Yes, in my opinion, the idea could work, provided that the technologies are developed by the public service, possibly in partnership with private actors.
However, the terms should be well negotiated with third parties. If, for example, the public service develops technologies along with external providers, it could make them available free of charge to other businesses or media companies, but this entails having first received the explicit consent of the third party. These are very often private commercial entities, which also understandably have an interest in selling the software or technology they produce. Therefore, assuming that this aspect can be negotiated, it should be put in black and white that the public service would not only have the right to use these technologies but could also license other local and regional authorities to use them.
To give you a first-hand example, in the latest report of the Swiss Federal Media Commission of which I am a member, we argued that, to guarantee future development of the media also at a technological level, there should be public resources that allow all media, be they public or private companies, to access and collaboratively develop infrastructure and technologies through public-private partnerships, which could then be made available to all. We didn’t refer specifically to AI, but to infrastructure and technologies in general.
Benazzo: In the EU, an important implication of granting all newspapers and media free access to AI technologies would be that these technologies would escape national governments’ control. As the cases of Hungary and Poland remind us, national governments capture media outlets, making independent journalism difficult. In this way, however, EU institutions would interact directly with media actors, bypassing the oversight of national authorities.
Porlezza: That’s a plausible scenario. If, as often happens in EU tenders, huge resources are made available, it could also be stated as a condition that the technology developed must necessarily be open source. By doing so, EU-funded technologies could be freely transmitted and exploited by others, too. Thus, this could be an additional incentive to motivate the media to participate in these projects.
Still, I don’t know how feasible it is in the short and medium term; I’m a bit skeptical. Nonetheless, if the goal is to democratize these technologies, or access to them, then yes, this could be a step in the right direction.