(by Guillaume Tissier, President of CEIS)

Humans can no longer deal with cyber attacks single-handedly: this view is widely shared by cybersecurity professionals. There are several reasons for this: the volume of attacks, their constant mutations, the speed of reaction they require, and the shortage of expertise on the market. Artificial intelligence (AI), already widely used in fraud prevention, increasingly appears to be a major « game changer » in cybersecurity, in particular in defensive cyber warfare. Indeed, the recent Villani report names defence as one of the four strategic sectors for artificial intelligence. Conversely, the generalisation of artificial intelligence, including in fully autonomous weapon systems, will soon raise cybersecurity issues of its own, as the technology itself could be used for malicious purposes.

 

What role for AI in cybersecurity?

In just a few years, artificial intelligence has become a marketing buzzword among cybersecurity vendors. The strategy has paid off: one third of companies state that they use AI-based security solutions[1]. This figure, however, masks very diverse realities, with some solutions relying more on sophisticated rule engines than on genuine AI features. Indeed, to speak of artificial intelligence, a system needs 1) a capacity to perceive its environment through training, whether supervised or not; 2) a capacity for analysis and problem solving; and 3) a capacity to propose actions, or even to make decisions autonomously.

 

In theory, AI can contribute greatly to cybersecurity, in terms of prevention, anticipation, detection and reaction. In practice, detecting vulnerabilities and internal or external threats is one of the most mature uses of AI. The need is pressing, since existing signature-based detection systems are showing their limits: a high number of false positives; an inability to adapt to the latest threats, especially APTs; and cumbersome signature databases, which degrade performance. Players such as iTrust[2] in France (with its Reveelium solution), Darktrace[3] in the United Kingdom, and Cylance[4] (an American company that recently opened a branch in France[5]) specialise in AI-based solutions for anomaly detection and behaviour analysis. For their part, most network and endpoint security vendors (Symantec, Sophos, F-Secure, SentinelOne, Fortinet, Palo Alto Networks…) have integrated more or less advanced AI building blocks into their solutions, sometimes by acquiring small specialised players (such as Invincea, bought by Sophos, or RedOwl, bought by Forcepoint in 2017).
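
To make the contrast with signature-based detection concrete, here is a minimal sketch of behaviour-based anomaly detection using scikit-learn's IsolationForest. The features, data and thresholds are illustrative assumptions, not any vendor's actual pipeline:

```python
# Minimal sketch of behaviour-based anomaly detection, as opposed to
# signature matching: the model learns "normal" activity and flags outliers.
# Features and data are toy examples; no real vendor pipeline is implied.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Toy per-host features: [logins per hour, MB sent, distinct ports contacted]
normal_traffic = rng.normal(loc=[5, 50, 10], scale=[2, 15, 3], size=(500, 3))

# Train only on (presumed) normal behaviour -- unsupervised learning,
# so there is no signature database to maintain or keep up to date.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A host suddenly exfiltrating data and scanning many ports
suspicious = np.array([[4, 900, 120]])
print(detector.predict(suspicious))        # -1 => flagged as anomalous
print(detector.score_samples(suspicious))  # lower score = more abnormal
```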

 

Beyond detection, incident response is also heavily affected by this trend. The idea is to multiply the efficiency of SOCs and CSIRTs by making SIEMs ever more intelligent. Splunk[6] announced a few days ago its acquisition of Phantom Cyber[7], a specialist in automation and incident response orchestration. IBM, for its part, has incorporated its Watson module into QRadar and now offers a « cognitive security » option[8] that can jointly process structured data (logs, for instance) and unstructured data (expert opinions, social networks…).
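
As a rough illustration of the kind of logic such orchestration automates, here is a hypothetical playbook in Python. The alert format, thresholds and action names are invented for the example and mirror no specific product's API:

```python
# Hypothetical incident-response playbook, sketching the kind of decision
# logic a SOAR platform automates. All names and thresholds are invented.
from dataclasses import dataclass

@dataclass
class Alert:
    source_ip: str
    severity: int        # 1 (low) .. 10 (critical)
    category: str        # e.g. "malware", "phishing", "exfiltration"

def enrich(alert: Alert) -> dict:
    """Stand-in for threat-intelligence lookups (reputation feeds, WHOIS...)."""
    return {"known_bad": alert.source_ip.startswith("203.0.113.")}

def respond(alert: Alert) -> list[str]:
    """Decide counter-measures; a human validates anything high-impact."""
    actions = ["open_ticket", "collect_evidence"]
    intel = enrich(alert)
    if intel["known_bad"] or alert.severity >= 8:
        actions.append("quarantine_host")
        if alert.category == "exfiltration":
            # Blocking traffic is a decision that involves "commitment":
            # flag it for analyst approval rather than acting blindly.
            actions.append("request_analyst_approval:block_ip")
    return actions

print(respond(Alert("203.0.113.7", severity=9, category="exfiltration")))
```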

 

To these various defensive uses we should add the possibility of using AI to authenticate users from a fingerprint derived from the analysis of their own behaviour (see DARPA’s Active Authentication programme[9]).
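
As a toy illustration of behaviour-based authentication of this kind (purely illustrative, and not DARPA's actual approach), one can compare a user's keystroke timings against an enrolled profile:

```python
# Toy sketch of continuous, behaviour-based authentication using keystroke
# timings. Purely illustrative; it does not reflect DARPA's actual methods.
import statistics

def enroll(samples: list[list[float]]) -> tuple[list[float], list[float]]:
    """Build a profile: mean and stdev of each inter-key interval."""
    means = [statistics.mean(col) for col in zip(*samples)]
    stdevs = [statistics.stdev(col) for col in zip(*samples)]
    return means, stdevs

def matches(profile, observed: list[float], tolerance: float = 3.0) -> bool:
    """Accept if every interval is within `tolerance` stdevs of the profile."""
    means, stdevs = profile
    return all(abs(o - m) <= tolerance * s
               for o, m, s in zip(observed, means, stdevs))

# Enrolment: inter-key delays (seconds) when the user types their passphrase
profile = enroll([[0.11, 0.23, 0.15], [0.10, 0.25, 0.14], [0.12, 0.22, 0.16]])
print(matches(profile, [0.11, 0.24, 0.15]))  # True: consistent behaviour
print(matches(profile, [0.30, 0.05, 0.40]))  # False: likely another typist
```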

 

Main uses of artificial intelligence in cybersecurity

 

Use | Sub-use | Description | Maturity (out of 4)
Prevention | Code safety | Programming support; automatic bug fixing | ●●○○
Prevention | Cyber-resilience | Auto-adaptive systems able to reconfigure themselves automatically when attacked | ●○○○
Anticipation | Cyber threat intelligence | Data leak prevention; analysis and characterisation of past attacks; monitoring of potential attackers | ●●○○
Anticipation | Identification of attacks | Identification of the perpetrators of a cyber attack | ●○○○
Anticipation | Cognitive cybersecurity | Aggregation and processing of unstructured data (expert opinions, social networks…) and structured data (logs) to support security teams | ●○○○
Detection | Vulnerability detection | Automated penetration testing; attack simulation; flaw detection in software | ●●●○
Detection | Internal or external threat detection | Anomaly detection based on behaviour analysis; anti-APT; log analysis; fraud prevention | ●●●○
Reaction | Incident response | Automation and orchestration of incident response (incident analysis, implementation of counter-measures, content filtering, evidence collection…) | ●●○○
Reaction | Identification of attacks | Identification of the perpetrators of a cyber attack | ●○○○

 

Beyond the technical layers of cyberspace, artificial intelligence can finally play an ambivalent role on the semantic layer: it enables the production of fake news on an industrial scale – as demonstrated by the fake Barack Obama speech produced by the University of Washington[10] – while also facilitating their detection. DARPA has just launched a media forensics programme to authenticate content[11].

 

Artificial intelligence will therefore progressively permeate all cybersecurity technologies and processes. A good example is the Cyber Grand Challenge, organised by DARPA during DEFCON 2016, where competing AI systems detected and patched flaws in an entirely automated way[12].

 

If using this technology for cyber defence seems promising, its limitations should also be taken into account, and they are less technological than human (understanding AI) and psychological (accepting AI). Are we ready to let machines make decisions that may have severe consequences? While behaviour detection or code analysis do not seem to be an issue, content filtering, IP address blocking and, above all, the identification of attackers are decisions that involve « commitment ». Generally speaking, artificial intelligence cannot replace human intelligence; its mission is above all to augment it. This implies that the technology must not be a black box: users must be able to follow the various stages of reasoning and understand the decision. This is the essential prerequisite for the trust they may, or may not, place in the system. There is also a real risk of battles between AIs seeking to weaken and deceive opposing machines. At DEFCON 2017, researchers thus demonstrated that it was possible to use the OpenAI framework to create completely undetectable malware[13].

 

What cybersecurity for AI?

The connection between AI and cybersecurity therefore has another, darker side, linked to the misuse of the technology, which can itself be hijacked and attacked. First come data poisoning attacks, which consist in injecting biased or poor-quality data during the training phase; Tay, Microsoft’s chatbot (or conversational robot), was one such victim[14]. Another method is inference attacks, which consist in forcing AIs to disclose their internal workings (thresholds, rules…) by probing them with various scenarios; this method is already widely used by cybercriminals to deceive fraud prevention systems. Finally, it is possible to deceive AI systems by slightly modifying their inputs or environment, as Google researchers recently demonstrated with image recognition[15]. These weaknesses led Adi Shamir, co-inventor of the RSA algorithm, to warn that we should at all costs avoid asking an AI system how to save the internet: the risk is high that its first recommendation would be to kill the network in order to better save it…
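
A minimal sketch of the third weakness, an evasion attack on a toy linear detector, shows the principle: a tiny, targeted change to the input flips the decision. The model and data are stand-ins invented for the example, not the systems cited above:

```python
# Minimal sketch of an evasion ("adversarial example") attack on a linear
# classifier: a small, bounded perturbation flips the decision.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # weights of a toy "detector"
b = -0.1

def classify(x):
    return "malicious" if x @ w + b > 0 else "benign"

x = np.array([0.9, 0.2, 0.4])    # sample correctly flagged as malicious
print(classify(x))               # -> malicious

# FGSM-style perturbation: step against the sign of the score's gradient,
# which for a linear model is simply the weight vector itself.
epsilon = 0.35
x_adv = x - epsilon * np.sign(w)
print(np.abs(x_adv - x).max())   # perturbation bounded by epsilon
print(classify(x_adv))           # -> benign: the detector is deceived
```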

 

At the military level, these risks are all the more worrisome as AI will soon be omnipresent in weapon systems, which some countries envision as largely autonomous in the near future. While the United States primarily conceives of AI as a means to augment human performance, both physical and cognitive, Russia is working on the full automation of certain platforms: its stated aim is to robotise 30% of its military equipment by 2025 in order to progressively remove human beings from the front line. In this global competition, China is not lagging behind either, and is seeking to use civilian technologies as a lever for its military capabilities, with the ambition of becoming a world leader by 2030.

 

Artificial intelligence has therefore become a major sovereignty issue. Faced with the proactive approach of its Russian, American and Chinese competitors, France has a card to play, both scientifically and with regard to available data and industrial capabilities. The challenge is to create the conditions for the security of artificial intelligence, and for trust in this technology. Firstly, by investing in the security of artificial intelligence itself; this is what the Villani report recommends when it suggests entrusting the ANSSI (the National Cybersecurity Agency of France) with a mission on the subject. Secondly, by defining an ethical framework: how, for instance, can the right to be forgotten and data protection be reconciled with systems that gobble up and memorise billions of data points? Thirdly and finally, by focusing efforts on a few sectors in which France is a leader; cybersecurity is definitely one of them.

 

What strategy for France?

During the ‘AI for Humanity’ conference held at the Collège de France, the French President presented France’s ambitions and strategies related to AI[16].

Four priorities have been defined:

– To strengthen the AI ecosystem to attract the best talents;

– To develop a policy for opening up data;

– To create a regulatory and financial framework in favour of the emergence of AI champions;

– To initiate a discussion on AI regulation and ethics.


[1] http://www.esg-global.com/blog/artificial-intelligence-and-cybersecurity-the-real-deal

[2] https://www.itrust.fr/

[3] https://www.darktrace.fr

[4] https://www.cylance.com

[5] http://www.globalsecuritymag.fr/Florent-Embarek-Cylance-l-IA,20180605,79022.html

[6] https://www.splunk.com/fr_fr

[7] https://www.phantom.us/

[8] https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=SEW03134FRFR

[9] https://www.darpa.mil/program/active-authentication

[10] https://www.sciencesetavenir.fr/high-tech/le-vrai-obama-prononce-un-faux-discours-un-trucage-criant-de-verite_114855

[11] https://www.darpa.mil/program/media-forensics

[12] https://en.wikipedia.org/wiki/2016_Cyber_Grand_Challenge

[13] https://www.silicon.fr/machine-learning-creer-malwares-furtifs-181669.html/?inf_by=5a1c1c8b671db8013f8b4a8c

[14] https://www.lemonde.fr/pixels/article/2016/03/24/a-peine-lancee-une-intelligence-artificielle-de-microsoft-derape-sur-twitter_4889661_4408996.html

[15] http://www.ladn.eu/tech-a-suivre/hello-open-world/des-pirates-ont-reussi-a-hacker-lia-via-les-attaques-adversarial/

[16] https://www.gouvernement.fr/argumentaire/intelligence-artificielle-faire-de-la-france-un-leader
