Since social networks such as Facebook and Twitter have become powerful platforms attracting billions of users, cybercriminals have come to view them as vehicles for furthering their own interests. The development of social networks has thus been accompanied by the development of "social bots." Some social bots form "social botnets" designed to misinform and manipulate on a larger scale, building on the foundations of traditional bots and botnets.

Social bots

A social bot is linked to an account on a social network (Facebook, Twitter, etc.), unlike a traditional bot, which is tied to an infected machine and its IP address. Social bots are designed to attempt to pass the Turing test (Turing, 1950) and are sophisticated enough to dupe Internet users into mistaking them for human beings. Using artificial intelligence, they gain intelligent and selective access to knowledge on general topics and current events in order to build relevant references. This allows them to act convincingly "human," delivering coherent messages at passably irregular intervals. Such bots can help spread rumours, false accusations or unverified information, notably by retweeting automatically.
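As a purely illustrative sketch of the "passably irregular intervals" mentioned above (the post_message function and the message list are invented placeholders, not part of any real platform API), such a bot might schedule its posts as follows:

    import random
    import time

    def post_message(text):
        # Hypothetical placeholder: a real bot would call a social network API here.
        print(f"posting: {text}")

    messages = ["Comment on today's news...", "Retweet of a trending claim..."]

    for text in messages:
        post_message(text)
        # Wait a randomised, human-looking delay (here 10 to 90 minutes)
        # rather than posting at fixed, machine-like intervals.
        time.sleep(random.uniform(10 * 60, 90 * 60))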

They give the impression of being trustworthy people rather than programmed processes, which makes them a serious warning sign where cybercrime is concerned. Because they build on the concepts of botnets, they can form powerful social botnets able to infiltrate social networks and gain users' trust over time.

Online social networks (OSNs) are therefore threatened by social bots: accounts whose behaviour imitates that of human users but with malicious intent. A social botnet is a group of social bots under the control of a bot herder that collaborate for malicious purposes and imitate the interactions of legitimate users to reduce the risk of individual detection. Furthermore, J. Zhang et al. (2016) showed that using a social botnet is more advantageous and more effective for distributing spam and conducting influence operations, with potentially harmful effects.

Social bots at work

Some social bots have already been used to infiltrate political discourse, steal personal data, spread false information and manipulate the stock market. They are capable of conducting cyberattacks, creating security problems, and exerting influence through operations mounted against political opposition groups or dissidents, listed companies, and so on. According to certain assessments, more than half of Twitter accounts are actually bots, and more extensive influence operations are to be expected. Some could even threaten the foundations of democracy, for example by inciting hatred and violence, triggering serious or large-scale disturbances, or influencing elections.

According to B. Solis (2012), online digital influence is defined as "the ability to cause effect, change behavior." A framework may then be established based on three axes:

  • Reach, linked to popularity, proximity and goodwill.
  • Resonance, linked to signal frequency, period and amplitude.
  • Relevance, linked to authority, trust and affinity.

Social bots could thus establish varied and effective strategies by modulating these criteria.
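As a purely illustrative sketch (the 0-to-1 scales and the weights below are assumptions chosen for the example, not part of Solis's framework), an influence score could be modelled as a weighted combination of the three axes:

    # Illustrative only: axis values and weights are assumed, not taken from Solis (2012).
    def influence_score(reach, resonance, relevance, weights=(0.4, 0.3, 0.3)):
        """Combine the three axes (each scaled to [0, 1]) into a single score."""
        w_reach, w_resonance, w_relevance = weights
        return w_reach * reach + w_resonance * resonance + w_relevance * relevance

    # Example: a bot with high resonance (frequent, amplified posts)
    # but modest reach and relevance.
    print(influence_score(reach=0.3, resonance=0.9, relevance=0.4))

A bot herder modulating these criteria amounts, in this toy model, to tuning the three inputs to maximise the overall score.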

Infiltration strategies of social bots

According to Y. Boshmaf et al. (2013), social bots are able to infiltrate Facebook in part because more than 20% of its users accept friend requests from strangers indiscriminately, and more than 60% accept such requests from accounts with which they share at least one contact. In addition, J. Zhang et al. (2016) showed that simple automated mechanisms can successfully infiltrate social networks such as Twitter, and C. A. Freitas et al. (2015) showed that social bots are capable of producing credible content and adopting effective infiltration and influence strategies.
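To illustrate the arithmetic behind those acceptance rates, the following sketch estimates how many friend requests a bot might expect to succeed; only the 20% and 60% rates come from the figures quoted above, while the request volumes are invented for the example.

    # Acceptance rates quoted from Boshmaf et al. (2013); request counts are invented.
    P_ACCEPT_STRANGER = 0.20      # request from an account with no common contact
    P_ACCEPT_ONE_CONTACT = 0.60   # request from an account sharing at least one contact

    cold_requests = 1000          # first wave: no contacts in common
    warm_requests = 1000          # second wave: at least one contact in common

    expected_cold = cold_requests * P_ACCEPT_STRANGER
    expected_warm = warm_requests * P_ACCEPT_ONE_CONTACT

    print(f"Expected acceptances, cold wave: {expected_cold:.0f}")   # about 200
    print(f"Expected acceptances, warm wave: {expected_warm:.0f}")   # about 600

This is why infiltration typically proceeds in waves: early acceptances create the common contacts that make later requests far more likely to succeed.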

Because social bots play on emotions and contagion, some could reach the collective unconscious, infiltrating populations through manoeuvres intended to alter their perceptions or manipulate them in order to destabilise a group, a society or a state.

Detecting social bots

One of the major challenges in detecting social bots is that they are becoming increasingly sophisticated, blurring the behavioural boundary between bots and humans. They are already capable of mining the Internet and databases to emulate human behaviour (activities, profiles, messages, live chat, etc.), and in particular of identifying influential people and engaging content using popularity and keyword indexes. The scientific community is working to detect them automatically, or at least to distinguish humans from bots, using a variety of approaches described in the literature.
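As a minimal sketch of one family of such approaches (a feature-based heuristic; the features, thresholds and scores below are assumptions chosen for illustration, not a published detector), a simple check might flag accounts whose posting behaviour is too regular or whose content is too repetitive:

    from statistics import pstdev

    def bot_likelihood(post_intervals_s, texts, followers, following):
        """Crude heuristic score in [0, 1]; thresholds are illustrative assumptions."""
        score = 0.0
        # Very regular posting intervals are machine-like.
        if post_intervals_s and pstdev(post_intervals_s) < 30:
            score += 0.4
        # Highly repetitive content suggests templated messages.
        if texts and len(set(texts)) / len(texts) < 0.5:
            score += 0.3
        # Following far more accounts than follow back is a classic warning sign.
        if following > 10 * max(followers, 1):
            score += 0.3
        return score

    # Example: an account posting every minute with duplicated messages.
    print(bot_likelihood([60, 60, 61, 60],
                         ["Buy now!", "Buy now!", "Buy now!"],
                         followers=5, following=400))  # close to 1.0 (high likelihood)

Real detectors in the literature combine many more features (network structure, language, timing, client metadata) and machine learning, precisely because sophisticated bots defeat simple rules of this kind.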

Attacks involving social botnets

Certain forms of malicious code (e.g. Trojan horses) use social networks not only to manipulate but also to infect other users' systems and accounts by means of malicious links that redirect to an infected profile. This trend is facilitated by the fact that people trust third parties who present themselves as "friends" of members of their social circles. When widespread, it represents a major weakness of social networks, whose users may be led to participate in unlawful activities in spite of themselves by serving as relays. Social botnets formed in this way can then be used to automate the spread of malicious links and extend the reach of attacks. These novel attacks include the hijacking of hashtags and its variants, as well as invasion via reposts and retweets.

Hijacking a hashtag targets an organisation or a group by misappropriating its hashtag, involving it in the distribution of spam and malicious links within its circles and then allowing targeted cyberattacks to be launched on its behalf or against it. Trend hijacking is a variant in which hashtags are chosen from the hottest trends of the moment so that bots can direct the attack at as many victims as possible, better spreading it and drawing in potential victims who, unbeknownst to them, are at once attracted and turned against one another. Another variant consists of posting as many links as possible in the hope of getting just a few clicks on each one, a practice that toys with the limits of social networks' terms of use. While hijacking a company's hashtag exposes it to a massive organisational risk, posting such links also aims to publicise distorted or defaced websites in order to seriously damage the company's reputation and image.
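As a purely conceptual sketch of trend hijacking (the trend data, the build_post helper and the placeholder link are all invented for the example; no real platform API is involved), a bot simply attaches its payload to whichever hashtags currently have the highest volume:

    # Invented trend data: (hashtag, current post volume).
    trending = [("#breakingnews", 120_000), ("#finals", 45_000), ("#giveaway", 30_000)]

    def build_post(hashtag, link):
        # Hypothetical helper: composes the text a bot would publish.
        return f"You have to see this {hashtag} {link}"

    payload_link = "http://example.invalid/malicious"  # placeholder, not a real URL

    # Pick the hottest trends to maximise exposure of the malicious link.
    for hashtag, _volume in sorted(trending, key=lambda t: t[1], reverse=True)[:2]:
        print(build_post(hashtag, payload_link))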

As for invasion via reposts and retweets, botnet-driven malicious activity can be identified when a message is immediately reposted or retweeted by thousands of other accounts or more. Even when moderation or reporting succeeds in flagging and deleting the source account, the invasion may persist despite every effort to curb it, owing to the sheer volume of reposts and retweets.
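A minimal sketch of how such a burst might be spotted (the timestamps, the 60-second window and the threshold of 1,000 retweets are illustrative assumptions, not any platform's actual detection rule):

    def is_retweet_burst(original_ts, retweet_timestamps, window_s=60, threshold=1000):
        """Flag a message retweeted at least `threshold` times within `window_s`
        seconds of its publication, a pattern typical of coordinated botnets."""
        early = [t for t in retweet_timestamps if 0 <= t - original_ts <= window_s]
        return len(early) >= threshold

    # Example: 1,500 retweets within 30 seconds of posting.
    original = 0
    retweets = [original + i * 0.02 for i in range(1500)]
    print(is_retweet_burst(original, retweets))  # True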

Unlike traditional botnets, social botnets are not directly involved in DDoS attacks, but they are used as command and control (C2) channels to improve the coordination of such attacks through better instructions concerning timing, domains, target IP addresses, logins, and so on. This is a "lucrative business" in which the highest bidder obtains access for a given period of time before the affiliate changes. While social botnets provide real added value for the affiliate, for the botnet's manager they involve full-time work to set up and maintain; for both, they entail acts classified as organised crime.
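To make the idea of C2 instructions concrete, the following sketch parses a hypothetical instruction format; the field names and the message layout are entirely invented for illustration, since real botnets use their own, typically obfuscated, encodings.

    # Hypothetical instruction format: "key=value" pairs separated by semicolons.
    def parse_c2_instruction(post_text):
        fields = {}
        for pair in post_text.split(";"):
            key, _, value = pair.partition("=")
            fields[key.strip()] = value.strip()
        return fields

    example = "action=report; target=203.0.113.10; start=2017-05-01T12:00Z; domain=example.invalid"
    print(parse_c2_instruction(example))
    # {'action': 'report', 'target': '203.0.113.10', 'start': '2017-05-01T12:00Z', 'domain': 'example.invalid'}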

Conclusion

The authorities must assess these phenomena and their consequences in order to implement effective countermeasures in partnership with public and private institutions, including international organisations. In this regard, Facebook, Twitter, Google and other platforms are vectors for content and links, and their responsibility has yet to be clarified. For their part, ordinary citizens and those who represent them must understand that they have an important role to play by acting in less automatic, less immediate and more thoughtful ways. Such prevention requires the development of an appropriate culture of awareness.

References

Boshmaf Y. et al. (2013): Design and analysis of a social botnet. Computer Networks, vol. 57, no. 2, pp. 556-578.

Freitas C. A. et al. (2015): Reverse Engineering Socialbot Infiltration Strategies in Twitter. Proc. 2015 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining.

Guinier D. (1991): Computer "virus" identification by neural networks: An artificial intelligence connectionist implementation naturally made to work with fuzzy information. ACM SIGSAC Review, vol. 9, no. 4, pp. 49-59.

Guinier D. (2004): L’intelligence artificielle dans les systèmes de décision (Artificial Intelligence in Decision-Making Systems). Expertises, no. 284, Aug.-Sep., pp. 295-299.

Guinier D. (2016): Réseaux et bots sociaux : du meilleur attendu au pire à craindre (Social Networks and Bots: From the Best to Be Expected to the Worst to Be Feared). Expertises des systèmes d’information, no. 417, Oct., pp. 331-335.

Guinier D. (2017): Réseaux et bots sociaux : fondements et risques émergents (Social Networks and Bots: Foundations and Emerging Risks). To be published in La revue du GRASCO (Research Group on Actions against Organised Crime), Doctrine Sciences criminelles.

Solis B. (2012): The rise of digital influence, Altimeter Group, Research Report, March.

Turing A. (1950): Computing Machinery and Intelligence. Mind, Oxford University Press, vol. 59, no. 236, Oct., pp. 433-460.

Zhang J. et al. (2016): The Rise of Social Botnets: Attacks and Countermeasures. Cornell University Library, 14 p.

 

Daniel Guinier is a Doctor of Science, a Certified Information Systems Security Professional (CISSP), an Information Systems Security Management Professional (ISSMP) and an Information Systems Security Architecture Professional (ISSAP), a Member of the Business Continuity Institute (MBCI) in continuity and crisis management, an expert in cybercrime and financial crimes for the International Criminal Court in The Hague, and a Colonel (RC) of the French gendarmerie.
