AI Hackers and Systemic Risk

Over the past few years the financial sector has been reaping the benefits of applying AI techniques such as machine learning, deep learning and natural language processing (NLP) to a wide range of problems, including, of course, contract data management. But we also need to be aware of the potential for these tools to be misused.

In his recently published paper The Coming AI Hackers, security guru Bruce Schneier looks at how artificial intelligence will be applied to hacking, meaning hacking everything from IT systems to economic, social and political systems, and at how AI systems will first be used to hack us and then become the hackers themselves. The full paper runs to around 50 pages but is very readable (and well worth the time). Given the work currently under way to develop standards that could lead to 'smart' derivatives contracts (containing executable code), several of the examples he gives seem directly relevant.
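To make the idea of a derivatives contract "containing executable code" concrete, here is a minimal sketch of what one self-executing clause might look like. The function name, terms and figures are all invented for illustration; real smart-contract standards would specify far more (day-count conventions, fixing sources, netting, and so on).

```python
# Illustrative sketch only: a hypothetical 'smart' contract clause for the
# floating leg of an interest rate swap, expressed as executable code.

def floating_leg_payment(notional: float, reference_rate: float,
                         spread: float, day_count_fraction: float) -> float:
    """Payment due on a floating leg: notional * (rate + spread) * accrual."""
    return notional * (reference_rate + spread) * day_count_fraction

# Invented example: 10m notional, 3% reference rate, 50bp spread,
# quarterly accrual period.
payment = floating_leg_payment(10_000_000, 0.03, 0.005, 0.25)
print(round(payment, 2))  # 87500.0
```

The point is simply that once a clause is expressed this way, a machine can evaluate it, and, as Schneier's argument goes, a machine can also probe it for unintended behaviour.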

We’re asked to imagine feeding national, or even global, tax laws into an AI system. Tax law consists of formulas for calculating the tax due, but it is often ambiguous, and that ambiguity makes codification problematic, giving us a basic level of defence against AI systems (and, as Schneier comments in the paper, guaranteeing that “there will be full employment for tax lawyers for the foreseeable future”!).
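The contrast between the mechanical and the ambiguous parts of tax law can be sketched in a few lines. The band figures below are loosely modelled on UK income tax bands but are used purely for illustration; the point is that the arithmetic is trivially codified while terms of art are not.

```python
# Toy contrast: the formulaic part of a tax rule codifies easily; the
# judgment-laden part does not. Figures are illustrative only.

BANDS = [(12_570, 0.0), (50_270, 0.20), (125_140, 0.40), (float("inf"), 0.45)]

def tax_due(income: float) -> float:
    """Progressive tax over bands: purely mechanical, easily codified."""
    tax, lower = 0.0, 0.0
    for upper, rate in BANDS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

def is_deductible(expense_description: str) -> bool:
    """Was the expense 'wholly and exclusively' for business purposes?
    There is no formula; a human (or a court) has to interpret the words.
    This is the ambiguity that resists codification."""
    raise NotImplementedError("requires legal judgment, not computation")

print(tax_due(60_000))  # 11432.0
```

An AI system can optimise ruthlessly against `tax_due`; it is functions like `is_deductible` that, for now, keep the lawyers employed.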

However, when applied to areas where the rules are “designed to be algorithmically tractable” (he gives the example of the financial system), providing the AI system with all the relevant information and giving it the goal of “maximum profit legally” will result in new hacks, some of which are likely to be beyond human comprehension: we won’t even realise they are happening.
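The "maximum profit legally" dynamic can be caricatured in a few lines of code. Everything below is invented: a hypothetical per-account position limit, treated by an optimiser as a hard constraint. The optimiser never breaks the rule as written, yet the most profitable answer it finds (maxing out every account, since the rule says nothing about total exposure) is exactly the kind of letter-of-the-law exploit Schneier has in mind.

```python
# Toy illustration (entirely invented rules) of an optimiser hacking a
# mechanically-checkable rule: maximise profit subject to "stay legal".

from itertools import product

POSITION_LIMIT = 100      # hypothetical per-account position limit
PROFIT_PER_UNIT = 3.0     # hypothetical profit per unit of exposure

def within_rules(accounts: tuple) -> bool:
    """The rule as written: no single account may exceed the limit."""
    return all(a <= POSITION_LIMIT for a in accounts)

def profit(accounts: tuple) -> float:
    return sum(accounts) * PROFIT_PER_UNIT

# Exhaustive search over candidate allocations across three accounts.
# The rule never mentions *aggregate* exposure, so the most profitable
# "legal" configuration is to max out every account.
best = max(
    (a for a in product(range(0, 101, 50), repeat=3) if within_rules(a)),
    key=profit,
)
print(best, profit(best))  # (100, 100, 100) 900.0
```

A real system would be searching a vastly larger space with far subtler rules, which is precisely why the resulting hacks may be beyond human comprehension.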

This might sound like science fiction, but with technology advancing rapidly it could become reality “within the next decade”. The regulators and policy makers responsible for mitigating systemic risk in the financial system certainly won’t be short of work either.