Like so many concepts in the cyber realm, Artificial Intelligence is neither new nor uniformly defined. The idea that a machine could replace a human dates from antiquity. The creation of ‘machines that think’ was the end-goal motivating research in the 1950s: the first AI program was written in 1955 at Carnegie Mellon University, an AI conference followed in 1956 and by 1959 MIT had established an AI lab. Distinct from machine learning (which trains algorithms on example data), AI tells us what to learn, where and why. Many of the so-called AI solutions offered by cybersecurity providers are in fact machine learning; the field of AI in cyber remains relatively nascent.
The evolution of AI can be thought of in broadly three phases:
- machines that can solve logic problems (such as winning a chess match; see the sketch after this list)
- algorithms that enable recognition and situational awareness (such as the facial recognition systems used on metros and by traffic police in cities such as Beijing)
- algorithms that use knowledge to enable contextual adaptation
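Phase 1 systems worked by exhaustive search over formally defined problems. As an illustration only (the toy game tree below is hypothetical, not any historical program), a few lines of minimax capture the essence of ‘solving’ a game:

```python
# A minimal sketch of Phase-1 'logic solving': exhaustive minimax search
# over a game tree. The tiny hand-built tree is illustrative only.
def minimax(node, maximizing):
    """Return the best score reachable from `node` assuming perfect play."""
    if isinstance(node, int):              # leaf: a terminal score
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A depth-2 game: the first player picks a branch, the opponent then replies.
game_tree = [[3, 12], [2, 4], [14, 1]]
print(minimax(game_tree, maximizing=True))  # -> 3, the best worst-case outcome
```

Chess engines of that era added pruning and handcrafted evaluation functions, but the underlying idea is the same brute-force search.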
Today we identify two broad types of AI:
- ‘narrow’ AI for solving a specific problem (such as in a game of chess)
- ‘general’ AI for solving complex, cross-domain problems (such as in natural language processing)
Most of today’s AI solutions and technologies are in Phase 2: recognition and situational awareness. The third phase – understanding why – essentially enables a machine to grasp context, make choices and judge whether a solution may (or may not) work.
Is cybersecurity getting smarter, faster?
AI has not evolved concurrently with cybersecurity, and over the past decade researchers have argued for new, cybersecurity-specific AI techniques to be developed. Many claim that, to be successful, AI must focus on human thought patterns, reasoning and values: this adds a subjective dimension to what has until now been a uniformly binary domain of secure / not secure, as the toy example below illustrates.
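To make that shift concrete, here is a deliberately simple sketch (the feature names, weights and threshold are invented for illustration) contrasting a classic binary rule with a graded, context-aware risk score:

```python
# A toy illustration (assumed names and weights) of the shift from a binary
# secure/not-secure verdict to a graded, context-aware risk score.
def binary_verdict(failed_logins: int) -> bool:
    return failed_logins > 3                   # classic rule: secure or not

def risk_score(failed_logins: int, after_hours: bool, new_device: bool) -> float:
    """Blend behavioural context into a 0..1 score instead of a yes/no."""
    score = min(failed_logins / 10, 1.0) * 0.6
    score += 0.25 if after_hours else 0.0
    score += 0.15 if new_device else 0.0
    return min(score, 1.0)

print(binary_verdict(2))                                   # False: nothing to see
print(risk_score(2, after_hours=True, new_device=True))    # ~0.52: worth a look
```

The score does not declare a system secure or insecure; it ranks how much human attention an event deserves.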
Today, AI is used to detect attack patterns across a defined network (such as a company intranet or communications network). Companies such as ZoneFoxAI, Interset, Darktrace and Paladion regularly show their AI wares at cybersecurity expos. Cybersecurity providers offer a combination of human and machine solutions to create an intelligent system that can spot a threat before it becomes a breach. Intelligent human-machine systems have been proven within the last decade((Chen, J. Y. & Barnes, M. J. (2013), Human-Agent Teaming for Multi-Robot Control: A Literature Review. US Army Research Laboratory, Aberdeen, MD.)), and commercial solutions that give network administrators situational awareness((Karaarslan, H. (2017), A Cyber Situational Awareness Model for Network Administrators. Naval Postgraduate School, Monterey, CA.)) are fast emerging.
AI applications in cyber defence are proven for tasks such as insider threat detection and real-time human behaviour modelling, albeit limited to closed environments (e.g. an intranet)((McGough, A. S. et al. (2015), Insider Threats: Identifying Anomalous Human Behaviour in Heterogeneous Systems Using Beneficial Intelligent Software (Ben-ware). ACM Press, 1–12. doi:10.1145/2808783.2808785)) and reliant on human input to verify the diagnosis. Such hybrid human-machine solutions need a visual interface – for all our ability to comprehend metadata, we humans rely on ‘seeing’ to aid our believing. Unsupervised detection((Tuor, A., Kaplan, S., Hutchinson, B., Nichols, N. & Robinson, S. (2017), Deep Learning for Unsupervised Insider Threat Detection in Structured Cybersecurity Data Streams. arXiv:1710.00811 [cs, stat].)) is the next goal, as sketched below; ultimately, we seek to delegate our collective security to the machines.
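The sketch below shows what unsupervised insider-threat detection can look like in miniature. It is not the deep-learning approach of the cited paper; it substitutes a simpler, widely available anomaly detector (scikit-learn’s IsolationForest), and the per-user activity features are invented for illustration:

```python
# A minimal sketch of unsupervised anomaly scoring over per-user activity
# features, in the spirit of (not reproducing) the cited insider-threat work.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical daily features per user: [logins, after-hours logins,
# MB transferred, distinct hosts contacted]
normal = rng.normal(loc=[20, 1, 50, 5], scale=[5, 1, 15, 2], size=(500, 4))
insider = np.array([[22, 9, 400, 30]])          # one anomalous record
X = np.vstack([normal, insider])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
scores = model.decision_function(X)             # lower = more anomalous

# Surface the most anomalous records for a human analyst to verify,
# mirroring the hybrid human/machine workflow described above.
suspects = np.argsort(scores)[:3]
print("records flagged for review:", suspects)
```

In keeping with the hybrid workflow described above, the output is a ranked shortlist for a human analyst to inspect, not an automatic verdict.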
The future cyber environment
Today, cybersecurity is conducted by legitimate parties (companies, cybersecurity professionals) contending with ‘bad actors’ (bots, real people), with both sides battling for control over as many machines as they can command. Imagining the future beyond a time horizon of a few years is a dangerous task. While Ray Kurzweil and others skilled in his art may have accurately predicted the invention of eyeglasses that project images onto a user’s retina, or virtual assistants on consumer laptops, these exciting leaps in technological ability are just one dimension of a messy human world. Conceptually, the cyber realm can be thought of as an additional plane that potentially augments and accelerates the scope and reach of already compound risks, such as infectious disease, collapse of financial markets or failure of food supply chains.
(Bruce Schneier, Chief Technology Officer of Resilient Systems, speaking at InfoSec2016)
AI takes us beyond the convergence of the real and the virtual, toward a world where machines understand and can manipulate the real, the virtual and the emotional. The future cyber environment therefore incorporates not only sensors of the physical world – the IoT – but also sensors of emotion and intent. Beyond attacks on data and communications, threat actors will likely exploit feelings and emotions to create new forms of ransomware and other cybercrime.
Is AI the next cutting edge in cyber defence?
As with many new inventions, the more prevalent our experiments, the more likely we are to see accidents and problems. AI is no exception: in the world of cyber-physical systems, the frequency and impact of AI failures are likely to increase.
Cybersecurity has evolved thus far on a separate track from AI, and it now seems likely the two will converge. Part of the reason is the sheer volume of data and devices: people simply are not capable of confidently securing everything, everywhere, 100% of the time. Humans are fallible. This new reality calls for new technology, and some argue that AI is the answer. For example, can the success of AI in identifying insider threats be used to teach positive, security-aware behaviours across an entire workforce?
Risks and opportunities
AI is apparently limitless in its scope and reach. As with all artificial creations, experimentation comes with great risk attached. The internet itself is proof that a once-ideal concept for enabling pervasive data-sharing in the pursuit of better human knowledge can be manipulated in ways unimagined at its inception. Unlike the internet, AI takes the machine world much, much closer to contact with real-time human thought and emotion. For this reason, AI has been identified as among the most dangerous of man-made risks.
In 2017, the research professionals attending an AI conference at Asilomar, California agreed a set of 23 general AI Principles, which read a lot like the ideals of the internet in its early form((Larson, C. (2018), If Artificial Intelligence Asks Questions, Will Nature Answer? Preserving Free Will in a Recursive Self-Improving Cyber-Secure Quantum Computing World. Cosmos and History: The Journal of Natural and Social Philosophy 14, 71–82.)) – and pose even greater problems: who should decide what counts as ethical AI, or universally beneficial AI? As a species, our collective attempts to ensure a better world for all are as yet unsuccessful: the 1948 Universal Declaration of Human Rights, for example, while adopted by the UN General Assembly, is still imperfectly observed. Humans still compete across a geopolitical terrain that is real, resource-constrained and dependent on human action. Forty years ago, a pioneer in Artificial Intelligence mused that the wise would seek to understand the easy problems first: now as then, ‘when we have solved a few more [low-level problems], the questions that arise in studying the deeper ones will be clearer to us.’((Marr, D. (1977), Artificial intelligence—A personal view. Artificial Intelligence 9, 37–48.))
For cybersecurity, the advent of AI for securing the digital realm apparently changes everything – yet it also changes very little. Cyber attacks will continue whether the defence is human, human/machine or machine. AI is at once a threat to cybersecurity and (many would argue) the route to improving it: the larger problem is how to use it wisely.