- COMP.SEC.100
- 8. Adversarial Behaviour
- 8.2 Identifying harmful activity
Identifying harmful activity
An ordinary user is bound to encounter some form of harmful activity in the digital world sooner or later. This may include, for example, a scam or fraud that meets the legal criteria for a criminal offence, targeted harassment on social media, or information influence. It is useful to be aware of the different forms of harmful activity and of how, for instance, scams can be recognised and avoided. The Finnish Police maintains a comprehensive site on different kinds of fraud, where you will also find information on how to act if you, or someone close to you, become a victim of a scam.
Social engineering and its methods are connected to many kinds of harmful activity and have been discussed earlier in these course materials. The latter part of this section focuses on information influence and disinformation.
Identifying information influence
Information influence is cost-effective for its practitioners, and in online environments it can be carried out easily and quickly, regardless of time and place. Contemporary information influence is closely tied to technology, and it is important to understand how it is implemented. Some of the underlying techniques include:
- Tracking. Websites, social media platforms and other systems that recommend content collect information and data about their users.
- Recommendation. The collected data and content recommendation are used not only for marketing, but also for political or social purposes.
- Deep learning. Enables, for example, facial and speech recognition.
- Reinforcement learning. Systems can learn autonomously through trial and error, for example to drive a car or play games. The same methods can be used to enhance information influence.
- Attention engineering. The content offered by social media platforms may be personally tailored: it does not necessarily represent so-called public opinion, or even match the content shown to other users. This also includes deliberately alternating between content intended to irritate and content intended to please the user.
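The interplay of recommendation and attention engineering can be illustrated with a deliberately simplified sketch. The field names and scores below are hypothetical, not taken from any real platform; the point is only that a feed ranked purely by predicted engagement will, by construction, push emotionally charged content to the top:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # hypothetical score from an engagement model
    emotional_charge: float      # hypothetical 0..1 estimate of emotional loading

def rank_feed(posts):
    """Order a feed purely by predicted engagement.

    Nothing here checks accuracy or public interest: content that
    provokes strong reactions floats to the top by construction.
    """
    return sorted(
        posts,
        key=lambda p: p.predicted_engagement * (1 + p.emotional_charge),
        reverse=True,
    )

feed = [
    Post("Calm factual report", 0.30, 0.1),
    Post("Outrage-bait claim", 0.90, 0.9),
    Post("Pleasant lifestyle photo", 0.60, 0.4),
]
for post in rank_feed(feed):
    print(post.text)
```

Even in this toy example the emotionally loaded claim ranks first and the factual report last, which mirrors why such ranking mechanisms are attractive to those conducting information influence.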
Identifying information influence is not always straightforward. All kinds of information are nowadays abundantly available from many sources, and it is not always easy to distinguish which information is reliable. Marketing communications, and even this learning material, are also a form of influencing through information, but information influence is usually understood as the dissemination of disinformation. Disinformation is deliberately false information, a modified version of the truth, or an “alternative” truth, as one infamous figure has put it. (Information that is merely incorrect, without intent to deceive, is sometimes referred to as *mis*information.) Typical characteristics of disinformation are listed below:
- Headlines are misleading, even if the information and claims in the text itself are correct.
- Claims or information are not justified or no sources are provided.
- A quotation, source or context is changed; for example, a quotation may be accurate but originally presented in a different context.
- Context is entirely invented.
- Claims and information are selectively chosen to make the matter appear entirely different or to fit a narrative that serves one’s own purposes.
- Sources are concealed or generalised; for example, it is claimed that “researchers at Harvard” hold a certain opinion.
- Images or charts are distorted or fabricated.
- A whole is inferred or constructed on the basis of a small part; for example, a single employee’s opinion is claimed to represent the view of an entire organisation.
- Opinions are presented as facts.
- It is claimed that something represents the opinion of the general public.
- Exaggeration and overgeneralisation create drama and polarise the issue at hand.
- Language is used that is loaded and evokes strong emotions. This may be an attempt, for example, to obscure the true significance of the matter. Appealing to emotions also makes critical thinking more difficult for the reader. In addition, emotionally charged messages tend to spread more efficiently than factual messages on social media platforms.
- Accurate information or claims are refuted or downplayed using false information or claims.
- Other participants in a discussion are belittled, mocked, or attacked personally. The aim is to reduce the weight of the other party’s words.
- An individual or institution is accused or blamed for something that the accuser is also guilty of.
- Speculation is used and false comparisons are presented; for example, unrelated issues are equated and claimed to be an analogy for the matter under discussion.
- Disinformation spreaders are used as representatives of the “opposing” opinion, for example in debates. This may also be unintentional, but it creates a false impression of the prevalence of the opposing view.
- An artificial and polarised “us versus them” setting is created.
- Complementary websites are used in a coordinated way to support the spread of misleading information, for example so that anyone fact-checking a fake news story finds additional “sources” that appear to confirm it.
- Conspiracy theories are used or developed with the aim of reinforcing rumours and arousing suspicion towards a particular party. They may also be used to invalidate real issues and reduce their perceived importance by labelling them as conspiracy theories.
Social media often functions as a channel for disinformation. One reason for this is that sharp, emotionally appealing messages attract a great deal of attention and spread efficiently on social media, both through likes and through messages that seek to criticise the content. Below are guidelines for identifying information influence on social media:
- Search online for information about the claim from other sources. Is it widely reported? If not, it may be information that has not been verified.
- Check who the sender of the message is. Examine the profile: is it new, and what kind of posting history does it have? For example, bot profiles often post around the clock from different parts of the globe and repeatedly share sharp, politically charged comments.
- Perform a reverse image search on the profile picture. If the same image is publicly available elsewhere, for example as a stock photo or on other people’s profiles, this may be a warning sign.
- Investigate whether the profile has other social media accounts. Do they provide confirmation of reliability, or do they raise further suspicions?
- What is the content of the message? Warning signs may include, for example, messages that seem “too good to be true”. If the message contains images, you can try to search for them using reverse image search. Many information influence messages use old images that have been attached to a misleading new context. If the image shows a clearly identifiable location, map searches can also be used when considering the image’s authenticity.
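One of the profile checks above, the round-the-clock posting pattern typical of bot accounts, can be sketched as a simple heuristic. The timestamps below are invented for illustration, and the rule is only a heuristic: shared accounts and legitimate scheduling tools also post across many hours.

```python
from collections import Counter
from datetime import datetime, timezone

def active_hours(timestamps):
    """Count in how many distinct hours of the day an account has posted.

    A human account usually clusters into waking hours; a profile that
    posts in nearly all 24 hours, day after day, may be automated.
    (Heuristic only, not proof of bot activity.)
    """
    hours = Counter(ts.hour for ts in timestamps)
    return len(hours)

# Hypothetical post times (UTC) for two accounts.
human = [datetime(2024, 5, 1, h, tzinfo=timezone.utc) for h in (7, 9, 12, 18, 21, 22)]
bot = [datetime(2024, 5, 1, h, tzinfo=timezone.utc) for h in range(24)]

print(active_hours(human))  # posts cluster into a few waking hours
print(active_hours(bot))    # posts spread across the whole day
```

In practice such a check would be combined with the other signals listed above, such as account age, posting history and the sharpness of the content, rather than used alone.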
Deepfake videos created using artificial intelligence have rapidly become a growing and increasingly convincing source of disinformation. Their number is said to have multiplied in just a few years (a claim presented here without a source: do you believe it?), and high-quality deepfake video and audio are often difficult to distinguish from the real thing, both for humans and for automated systems.