Photo: Aron Urb

INTERVIEW | Invisible Work and Invisible Harm: European Regulations Are Not Censorship but the Protection of People from Harm

26 January 2026

Prof. Marju Himma in conversation with Dr. Anna Antonakis

The harm caused by platforms is not limited to illegal content but extends to their very design and business models, which amplify anger and polarization. Protecting people from such harm is not censorship but a duty of democratic systems, argues Dr. Anna Antonakis, Senior Researcher in the field of Social Design at the University of Applied Sciences in Bern. In an interview with Riigikogu Toimetised, she stresses the need for transparency and civil society involvement, and identifies the development of critical digital literacy and the safeguarding of moderators’ mental health as key factors in reducing harm.

Let’s start with a very broad introduction, especially for people who are perhaps not familiar with the challenges of harmful content online and its moderation. Could you give some examples of harmful content that needs moderating, and what does moderating such content entail?

I’m an interdisciplinary scholar. I come from studying the social movement uprisings of 2010–2011 in Tunisia, what was later called the Arab Spring.

Back then, I came from an optimistic view, and from more optimistic empirical research, in which people could use social media to circumvent media censorship, including internet censorship thanks to various proxies, and expose police brutality by sharing photos, videos and so on.

That is now almost 15 years ago, so things have changed quite a lot. I am not saying that I have moved from an optimistic perspective to a pessimistic one, because we have always investigated the ambivalences of media technologies. With regard to social media and information and communication technologies, I pursue a socio-technological approach: the technology cannot be analyzed in isolation from society and vice versa; they co-constitute each other. And with regard to your question, what is harmful content? Different harms are distinguished in the literature, but also by platforms and in policy documents such as the EU’s Digital Services Act (DSA). “Harm” can mean harmful in the sense of illegal, that is, illegal content: for example, content exposing (sexualized) violence, or pictures taken – perhaps also manipulated with AI – and circulated without consent.

It can also be the selling of weapons, drugs and illegal materials of any sort. Here, moderation is definitely needed, and we also see that platforms continuously update their policies. For example, YouTube has updated its firearms policy, stipulating that content providing instructions on how to remove certain firearm safety devices will be deleted.

That is harmful and illegal content. But harmful can also be discrimination, humor mocking specific groups, minorities and so on. Harm and discrimination online are complicated to define, also because they are most often not set out in legal text.

There is obviously a lot of work and scholarship on hate speech and how hate speech translates into everyday life. Others investigate what can count as content that radicalizes people or that can lead to their radicalization.

I engage in this type of research from a gender and intersectional perspective, being interested in the radicalization of men and masculinities. Here, I am not only looking at the content, but also at the design behind the platforms, which may show you a certain type of content but then, through ranking and other mechanisms, bring you to more radical content.

So, I’m interested in how these mechanisms work and I think we need to highlight the lack of transparency when it comes to understanding what mechanisms are employed by the platforms. 

To come to your second question, what mechanisms are at work? There are different mechanisms. A huge problem concerns big tech, by which I mean the social media platforms run by the five biggest technology companies right now, including Facebook, X and others: they often do not fully disclose what moderation they are using.

And that is, I think, a big problem, especially for us researchers, because it does not allow us to actually understand the dynamics between content, social media platforms and dynamics in society, such as social movements.

As to what mechanisms there are, I think there are different issues. We agree that we need content moderation mechanisms to moderate illegal and harmful content. There is automated content moderation, but what I am particularly interested in, also in the frame of a critical look at media literacy, is bias in automated content moderation. There are quite a few studies on this, and they are very important despite the issues with access to data. The bias of automated content moderation is one issue. A second issue that I am interested in is human moderators.

Here we thankfully have more and more work that looks at so-called “ghost workers” or data workers: workers who work in very problematic conditions and whose labor is being exploited. For example, the Data Workers’ Inquiry is an important participatory research initiative that collects the perspectives of data workers.

Coming back to what you already mentioned, the Digital Services Act, the DSA: what changes does it actually bring to moderating digital content, in your opinion? The DSA introduces obligations for very large online platforms, or VLOPs. But in your view, does it shift enough responsibility onto the tech companies, or should the EU or national governments also play a bigger role in this?

That is an interesting question, and I also wonder what your opinion is. The Digital Services Act is new legislation, and it is interesting how it is being, and will be, translated and implemented in different national contexts. To your question: I think it is a shared responsibility. A critical question that remains is really who can use these policies and how they are being implemented. For example, employing so-called “trusted flaggers”, special entities under the DSA that evaluate potentially illegal content based on their expertise, offers important opportunities, for instance for civil society organizations with expertise on gender-based violence online.

What is very important is also to take a broader, more holistic look at society as a whole and to strengthen knowledge around media and around moderation; policy can only be one element. It is an important element, and with regard to current debates, to the current US government’s discourse and the tech platforms that frame these regulations as censorship, I think it is important that the EU first of all keeps this legislation in place, defends it and makes it understandable. We also need, I think, more communication around this.

But regarding the responsibility of the platforms: states and the EU have borders, while the platforms are global. Is it even possible to say who is responsible for what?

I think platforms are responsible for their design. That is something we do not yet see enough of in legislation and policy regulation.

The design of these platforms – we know this by now, and it is becoming more and more transparent – is oriented towards engagement, attention and ultimately profit. That is their benefit, and it harms society as a whole. Content that brings more clicks and keeps people engaged on the platform is usually also harmful or distressing content, content that speaks to anger and is more polarizing in nature. That is a huge issue, and that is the responsibility of the platforms. Legislators are trying to create or protect a space where other virtues and values matter more.
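To make the design argument concrete, here is a minimal, purely illustrative sketch of engagement-oriented ranking. The signal names, weights and scoring formula are assumptions invented for this example; no platform discloses its actual ranking algorithm.

```python
from dataclasses import dataclass


@dataclass
class Post:
    text: str
    clicks: int           # how often the post was opened
    dwell_seconds: float  # time users spent on it
    angry_reactions: int  # outrage signals


def engagement_score(post: Post) -> float:
    # Hypothetical weights: every signal that keeps users on the
    # platform raises the rank, regardless of whether the content
    # is distressing or polarizing.
    return 1.0 * post.clicks + 0.1 * post.dwell_seconds + 3.0 * post.angry_reactions


def rank_feed(posts: list[Post]) -> list[Post]:
    # Pure engagement optimization: nothing in the objective
    # represents harm, accuracy or societal value.
    return sorted(posts, key=engagement_score, reverse=True)
```

Because outrage signals feed directly into the score, content that provokes anger floats to the top of the feed; that optimization target, rather than any single piece of content, is the design-level responsibility discussed above.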

Before we turn to media and information literacy, I am going to continue a bit more with harmful content, especially with generative AI and the new forms of online manipulation that are emerging. What new challenges do you anticipate, for example, over the next five years?

We see that AI is getting better at manipulating images, but also at manipulating video and audio. How can we equip a society, or societies, with the tools they need to detect what is real and what is not? I think regulation plays an important role here again, outlawing straight away technologies that are designed to potentially harm human dignity.

When we look at the broader history of communication studies, this has been an issue for a long time: understanding what realities are being reflected, what is being framed by the camera, by the editing of an interview and so on. It is, in a sense, an old challenge. But I think these challenges are much, much more complicated nowadays.

First with regard to the quantity of images, and second with regard to content fabricated by AI, also to create more harmful content, to create anger, to create emotions. Before, we had fake news in text; now it can be backed with videos and images that gain high visibility and resonance. I see this as a huge problem, and it is also a huge problem in conflict contexts.

As I said earlier, I come from the study of emancipatory, social and democratic movements. When we look at conflict parties that can use and manipulate these images in sensitive times – perhaps an image means something different in one locality than in another and can be planted or placed with the specific intention of creating unrest or anger – I think this is a major challenge. We need to equip our society, in particular journalists, to detect this and to have that critical view, but we also need to hold platforms accountable for what I would call the political design of their content moderation, which might prioritize one conflict party over another.

But your claims can be contested by saying that if AI has created some disinformation or misinformation, it can also be moderated by AI. What do you think about that, AI moderation?

I think AI moderation is only as good and as balanced as the infrastructure and context it has been created in, and we have to analyze which actors actually have the resources to own, create and develop it. There are pro-social algorithms that aim to foster peace and detect misinformation. The question is where they are being applied: are they applied everywhere, and who can apply them?

I would always prefer more sustainable, more long-term solutions such as education. Obviously we need to develop AI, but it needs to be open source and implemented for the masses and for the societal good.

What do you think: will human moderation remain, or will it be substituted by AI?

This is a decision made by the VLOPs without oversight, and that is also an issue. These are major questions that are already impacting democratic systems worldwide. For now the humans in the loop, the moderators, remain very important, even though their work stays invisible and the public believes it is all “automated” or “AI”. I think this model of combining AI moderation and human moderation will remain, also because it is cheap: it is cheap with moderators being in the Global South. If this exploitation continues, I do not see a replacement coming anytime soon.

Since one of your research strands has been on the unseen or invisible moderators, so to say, why is this work so invisible, and what consequences does that invisibility have for the moderators themselves as human beings and for society at large?

This is also what I look at as invisibilized forms of violence, because I think we see how violent structures are being reproduced through the invisibility of certain moderation practices. We cannot pinpoint them. The first is the technological side, such as shadow banning as a form of ranking rather than deleting: the content is not deleted, it is just ranked so low that it is not visible (see the sketch after this answer).

This disregards feminist content, mental health content and so on. And the other form of invisible moderation is exactly this: when we use our media, most people do not really know about the conditions under which this product is made. If we look at timelines, or at the curated feed as a product, the human labor behind it is invisible. If people really knew what they are consuming, or what lies behind it, maybe they would refrain from it. I think knowledge is power here as well.
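As a minimal, hypothetical sketch of the down-ranking (“shadow banning”) described above: the demotion factor and the numbers are invented for illustration, since platforms do not disclose how such demotion actually works.

```python
def visibility_score(organic_score: float, silently_flagged: bool,
                     demotion: float = 0.01) -> float:
    # Hypothetical shadow ban: the post is not deleted, its ranking
    # score is simply multiplied by a tiny factor, so it still exists
    # but almost never surfaces in anyone's feed.
    return organic_score * demotion if silently_flagged else organic_score


# A post with a strong organic score drops to near-invisibility once
# flagged, with no notification to the author and nothing a user
# could point to as "deleted" content.
print(visibility_score(87.0, silently_flagged=False))  # 87.0
print(visibility_score(87.0, silently_flagged=True))   # 0.87
```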

As for society, this means raising media and information literacy so that people are also knowledgeable about moderation. But how could that be achieved? Already starting in schools, right? Making students and pupils understand who is producing the content and who is designing it, because content moderation is also an interplay with the humans in the loop, and algorithmic content moderation is a design question. So how is their feed designed, and what does that mean?

I would definitely put that in the school curricula already. And there is also a bigger discussion on when and how to teach students when and where they should use their mobile phones during classes, or not.

I think we have an important discussion there, but it often focuses too much on the technology and on skills with regard to technology. We need more critical digital literacy. That also means bringing in this whole backbone of how media is produced, and that is one important part. The other, which we also see in critical digital literacy studies, is the importance of keeping the humanities alive and continuing to teach kids and students the importance of reading, of literature and of the arts as part of media literacy.

Media literacy and digital literacy are not only concerned with tools and skills, but first of all with the knowledge behind the production, with regard to the exploitation of human labor and natural resources alike, with the context, and also with having different ways of seeing the world, giving value to creativity and connection in our societies.

You mentioned at the very beginning of our interview that media and information literacy is outdated with regard to the harmful content of digital platforms. What do you mean by that? You have already mentioned some bits and pieces, but what does it mean? Is it what we teach at school, how to read newspapers and how to check facts? Is this outdated, or are we lacking something?

I meant that the moderation mechanisms – in particular the algorithmic, automated ones – are part of media literacy: understanding and investigating the biases of moderation.

But to come back to your question, is there anything lacking at the moment, or where can we improve? I think the first thing is to not only look at the skill set, but also, again, to see ourselves more in the broader socio-technological environment and to bring in curiosity. I think curiosity and creativity are major. They are not skills; we focus very much on skills, and we need more of the critical thinking part in media literacy education.

We also first need to teach the teachers, everybody who is in contact with others: we need to speak of teachers, but we also need to speak of psychologists, for example.

We know that a lot of people now use AI and ChatGPT to discuss their mental health issues. So we need psychologists and psychiatrists to have that as part of their education as well, so that when they interact with patients and people who seek help, they can tell them: you can use the tool for this and that, but maybe not for this.

So it is really across different professions, in particular healthcare and teaching professions, where we need to mainstream a critical media literacy.

Perhaps any recommendations for policymakers, as this journal is aimed first and foremost at politicians and policymakers?

First of all, really take bias seriously, bias in content moderation and in automated content moderation in particular. AI cannot always be the solution. Always include civil society organizations: bring them in to understand and learn about actual bias and experienced harms, but also about potential solutions in our datafied environment.

I think in the development of the DSA there were hearings with civil society organizations, but there should also be permanent channels implemented for that, because they know better; they are more in touch with users.

And then, the Digital Services Act, the Digital Markets Act and the AI Act are all important instruments, and I think the legislators should not back down but should stay strong and make clear that protecting people from harm is not censorship. That is also the duty of democratic systems.

And financing research: critical, interdisciplinary research in the field of media and information literacy. We did not speak about disciplines much. I come from a political science background, then brought in communication science, and now I am more in design research, because it opens up questions about how platforms and moderation systems are designed. In any case, we need interdisciplinary research. These types of questions can only be addressed and studied with AI experts, IT experts and others on board. We need interdisciplinary research teams, and that research must be publicly funded, because otherwise it will be the private sector funding the development of technologies for their own benefit.

Dr. Anna Antonakis was a speaker at the fourth annual Estonian Media Literacy Conference, “Screens and Souls – Life in the Digital World”.