
Tanja Ellingsen specializes in peace and conflict studies, extremism and terrorism, and hybrid threats, conspiracy theories and disinformation as threats to democracy, and has published several articles and book chapters on these topics. She also teaches international conflicts, terrorism, hybrid threats and crisis communication at both the bachelor's and master's level at Nord University in Bodø.
Tanja is currently involved in a research project on hybrid threats towards Critical Infrastructure (INFRAPOL) led by NUPI and a project on Democracy and Dictatorship funded by NFR. She has also recently been engaged in a research collaboration with Croatia and EU-HYBNET on Foreign Information Manipulation and Interference (FIMI).
What do today’s FIMI actors look like, and how do state and non-state actors differ in terms of goals, methods, and resources?
First of all, and as I said in my talk, we are now in a situation where Foreign Information Manipulation and Interference (FIMI) has grown into a major challenge. What we're seeing today is that these actors operate through big, interconnected networks that stretch across different digital platforms. It's not just a few social media accounts pushing false stories; it's entire systems built to manipulate information, confuse the public, and weaken trust in democratic institutions.
According to the EU's own assessments (EEAS, 2025), many of the most active FIMI actors are linked to states, especially Russia and China, and to a lesser extent Iran. They often use covert channels and hidden online ecosystems. What we see publicly is just the surface; underneath, there are large, coordinated operations designed to influence people in Europe and beyond.
Non-state actors also play an increasingly important role in FIMI, and they make the information environment more confusing and unpredictable. Unlike state actors, who act on behalf of a government and usually have long-term strategic goals, non-state actors can be almost anyone ("useful idiots"). They include influencers and online personalities, anti-state groups, criminal or activist networks, diaspora groups, or even ordinary individuals who deliberately spread misleading content. Some of them act independently, while others end up amplifying messages that originally come from a foreign state, sometimes intentionally, sometimes without realizing it.
Their motivations are usually much simpler than those of states. Some want to make money, some want attention, and others are driven by ideology or a desire to create controversy. Even though their goals may be narrow and their resources more limited, they can still have a big impact because they move quickly and know how to tap into ongoing debates, emotions, and social tensions within the target state, and even within local communities. We also know that states sometimes use non-state actors deliberately, as proxies, as part of their modus operandi.
Which digital tools and techniques are most central to contemporary FIMI operations, and how quickly are these evolving with the rise of new technologies such as AI?
Today's FIMI operations rely on a mix of digital tools and techniques. Actors use AI to create fake videos, images, and posts that look very real, making it easier to influence public opinion or stir up conflict. We talked about some of these in the webinar, e.g. the fake video of President Zelensky telling people in Ukraine to lay down their arms, and the Doppelganger case, where fake webpages that look very similar to the official pages, and have very similar web addresses, impersonated sites such as NATO's and The Guardian's.
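To illustrate why Doppelganger-style lookalike addresses are so easy to miss, here is a minimal sketch of how one might flag domains sitting only a character or two away from a legitimate one. This is a hypothetical illustration, not a tool used in any of the cases mentioned, and the domain names are made-up examples.

```python
# Hypothetical sketch: flag domains suspiciously close to a legitimate one,
# as in Doppelganger-style lookalike sites. Domains below are illustrative.

def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def flag_lookalikes(candidates, legit, max_dist=2):
    """Return candidate domains within max_dist edits of the legitimate domain."""
    return [c for c in candidates
            if c != legit and edit_distance(c, legit) <= max_dist]

print(flag_lookalikes(["theguaradian.com", "example.org", "theguardian.co"],
                      "theguardian.com"))
# → ['theguaradian.com', 'theguardian.co']
```

Real detection pipelines also have to account for homoglyphs (e.g. Cyrillic characters that render identically to Latin ones), which pure edit distance misses.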
They also use bot networks to push these messages so they trend quickly and look more popular than they actually are. This has been seen in campaigns targeting elections and major public events across many countries (as we saw and talked about during the webinar: prior to the Swedish election in 2018, the Romanian election in 2024, and the Paris Olympics in 2024, to mention a few).
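The coordinated amplification described above leaves a crude statistical signature: many distinct accounts pushing an identical message within a short time window. The following is a simplified sketch of that idea with made-up sample data; it is not how any platform or campaign mentioned here actually detects bots.

```python
# Hypothetical sketch: flag messages posted by many distinct accounts within
# a short window -- a crude signature of bot amplification. Sample data is
# invented for illustration.
from collections import defaultdict

def coordinated_bursts(posts, window_s=60, min_accounts=3):
    """posts: list of (timestamp_seconds, account, text) tuples.
    Returns texts pushed by >= min_accounts distinct accounts within window_s."""
    by_text = defaultdict(list)
    for ts, account, text in posts:
        by_text[text].append((ts, account))
    flagged = []
    for text, events in by_text.items():
        events.sort()  # order by timestamp
        for i in range(len(events)):
            # distinct accounts posting this text within window_s of event i
            accounts = {a for ts, a in events
                        if 0 <= ts - events[i][0] <= window_s}
            if len(accounts) >= min_accounts:
                flagged.append(text)
                break
    return flagged

sample = [
    (0,    "acct1", "Vote is rigged!"),
    (10,   "acct2", "Vote is rigged!"),
    (20,   "acct3", "Vote is rigged!"),
    (0,    "acct4", "Nice weather today"),
    (5000, "acct5", "Vote is rigged!"),  # outside the burst window
]
print(coordinated_bursts(sample))  # → ['Vote is rigged!']
```

In practice, amplification networks vary wording slightly to evade exact-match checks like this one, which is why real analyses cluster near-duplicate texts and account behaviour instead.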
Through hidden networks of websites, state-linked media, and proxy groups that reinforce each other's messages, campaigns become even harder to detect.
The use of humour and memes, and of content that triggers strong emotions, is also effective in making content spread even further.
We also know that there have been incidents where a campaign's sole intention was to overload fact-checkers (Operation Overload).
AI is a gift from heaven for influence operations. The tools are getting faster, cheaper, and more powerful, letting both state and non-state actors spread manipulative content more easily than ever before. Moreover, some of the false content may even be fed into the AI tools themselves, making it even more difficult to separate false from true.
Which countermeasures have proven most effective so far, and where do you see the biggest gaps in our current defenses against FIMI?
The most effective countermeasure against FIMI is situational awareness: that national and local governments, first responders, businesses and companies, and civil society, including you and me, know about FIMI, its various techniques, and the intentions behind them. "Stop, think (before you) share" is an important rule. This means that countering FIMI demands a whole-of-society approach!
Cooperation and exercises between countries, actors, and agencies are very important in that regard, and I applaud the good initiative taken in establishing the High North Civil Preparedness Forum. Equally important is awareness of vulnerabilities within societies and countries, in terms of potential cleavages and how they can be exacerbated during major events, crises, or elections, but also in daily discourse.
In addition to awareness and knowledge, we also need to regulate the platforms in terms of the content they allow. Policies such as the Digital Services Act, the European Democracy Action Plan, and the European Media Freedom Act have all strengthened the EU's ability to demand transparency from platforms and limit the spread of harmful content. However, there is a risk that monitoring and moderating platforms comes at the cost of free speech and the very liberal values and democracy that we are trying to protect, an argument that the Trump administration (and the big tech companies behind it) has used to put pressure on the EU and the Digital Services Act.
Although the EU has become much better at exposing FIMI, and has taken important initiatives towards changing regulations and laws, there are still big gaps. The most important one is that AI is developing at a speed that is difficult to keep up with. This means that we are still too reactive, and that attribution is getting harder because tactics like covert networks and AI tools hide who is behind an attack.
Secondly, there is a risk of focusing too much on Russia, so that other actors with a potential interest in targeting our democracies and political trust are overlooked. This includes China, but also others. The recent uncertainties regarding the transatlantic relationship should be mentioned here.
Europe's dual dependence on American platforms and Chinese hardware is a huge vulnerability.
Related information
Webinar series on the theme of countering foreign information influence
About High North Civil Preparedness Forum
The High North Civil Preparedness Forum is a three-year, EU-funded Interreg Aurora project led by the High North Cooperation. The project aims to strengthen interregional collaboration within civil preparedness in northern Finland, Sweden and Norway. The target groups for the project's activities are key civil preparedness stakeholders in the region, such as municipalities, agencies, and volunteer organisations.