Where Do We Go From Here?

The recent win by Google’s AlphaGo computer program in a five-game Go match against Lee Sedol, the world’s top player for over a decade, made headlines around the world.

And once you look past some of the more superficial tabloid predictions of imminent robot enslavement, you’ll find a number of intelligent and fascinating accounts detailing exactly why the event represents something of a technology landmark.

It’s worth digging into Google’s blog post for the background, because this was not just another case of a computer learning how to win a board game. Nor was it simply a resumption of the competition between man and machine following our previous defeats in chess (Kasparov against Deep Blue) and in Jeopardy (against IBM’s Watson).

Complex Choices

Instead, the choice of game here is significant. Go is an ancient game with more possible legal board positions than there are atoms in the observable universe. In fact, we only managed to calculate that number in 2016, some 2,500 years after the game’s invention. Why is this important? Because it means that a computer cannot possibly find the best moves simply by brute-force guessing of combinations. Building a system to index every possible position and then having the computer look up the best move each time is simply not possible.
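A quick back-of-the-envelope sketch makes the scale concrete. The figures below are approximate – roughly 2.1×10^170 legal positions (the 2016 computation mentioned above) and roughly 10^80 atoms in the observable universe – and the "fantasy machine" speed is an invented illustration:

```python
# Rough figures: ~2.1e170 legal Go positions, ~1e80 atoms in the
# observable universe. Both are approximations.
legal_positions = 2.1e170
atoms_in_universe = 1e80

# Even storing one position per atom leaves us absurdly short.
positions_per_atom = legal_positions / atoms_in_universe
print(f"Positions per atom of the universe: {positions_per_atom:.1e}")

# Suppose a fantasy machine evaluated 1e18 positions per second
# (far beyond any real hardware). Years to enumerate them all:
evals_per_second = 1e18
seconds_per_year = 3.15e7
years = legal_positions / (evals_per_second * seconds_per_year)
print(f"Years to enumerate every position: {years:.1e}")
```

However generous the assumptions, the exponent barely moves – which is exactly why lookup tables and brute force were never an option.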

Instead, a successful Go player needs to use something we can best understand as intuition. A human has to be able to act on no more than a feeling that one move is better than another – something it was generally accepted that computers couldn’t do.

Turns out general opinion was wrong.

Self-Taught

By ‘simply’ studying 30 million moves played by human experts, the program learned to predict the move a human would make 57% of the time. But that would only go so far. To win, the AlphaGo algorithm needed to learn new strategies – by itself.

And it’s here that the outcome was stunning. During the games (live streamed online to massive audiences), the computer made certain moves that made no sense to Go experts. And yet (for the most part) they worked. As one commentator mentioned, this was, at some level, an alien intelligence learning to play the game by itself. And as another put it:

“..as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.”

When it comes to AI, it’s particularly important to rein in the hyperbole. Playing Go in a way that’s at times unrecognisable to humans is hardly Skynet. But it’s fascinating to think that the program reached a level of expertise surpassing the best human player in a way that no one really fully understands. You can’t point to where it’s better, because the program teaches itself to improve incrementally through billions of tiny adjustments made automatically.

Neural Networks: Patience Pays Off

The success of computer over man came from a combination of different but complementary forms of AI – not least Neural Networks. After reading a little about the godfather of Deep Learning, Geoff Hinton, and listening to another excellent podcast from Andreessen Horowitz, it turns out that the Neural Network approach at the heart of AlphaGo was an AI method ridiculed as a failure by fellow scientists for many years, particularly in the 1980s.

It turns out that the concept was simply too far ahead of its time. As Chris Dixon points out in ‘What’s Next In Computing?‘, every significant new technology has a gestation period. But that often doesn’t sit easily when the hype cycle is pointing towards success being just around the corner. And as the bubble bursts, the impact of the delays on the progress of innovation is usually negative.

Nowhere has that been seen so clearly as within the field of Artificial Intelligence. Indeed, the promise has exceeded the reality so often that the industry has its own phrase for it – AI Winters – periods when both funding and interest fall off a cliff. Turns out that some complex things are, well, complex (as well as highly dependent on other pieces of the ecosystem falling into place). In the UK, for example, the Lighthill Report of 1973 criticised the utter failure of AI to achieve its grandiose objectives, leading to university funding being slashed and work being restricted to a few key centres (including my home city, Edinburgh).

Expert Systems: Data Triumphs

Thankfully, a few believers such as Hinton continued the work. And whilst the evolution of AI research is far outside the scope of this blog post, it’s interesting to see how things developed. At one stage, Expert Systems were seen as the future (check out this talk by Richard Susskind for how this applied in the context of legal systems).

To simplify, this is a method by which you find a highly knowledgeable human in a specific field, ask them as many questions as possible, compile the answers into a decision tree, and then hope that the computer can generate a similar answer to the expert’s when you ask it a question. The only problem is that this doesn’t really work too well in practice.
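The decision-tree approach described above can be sketched in a few lines. The "rules" here are invented purely for illustration – no real expert system would be this small – but the shape is the same: hand-compiled knowledge, mechanically walked:

```python
# A minimal sketch of the expert-system idea: the expert's knowledge is
# hand-compiled into a question tree, and the program simply walks it.
# Every rule below is invented for illustration only.

# Each internal node is (question, {answer: subtree}); each leaf is advice.
KNOWLEDGE = (
    "Is the contract in writing?",
    {
        "no": "Risk: oral contracts are hard to evidence; get it in writing.",
        "yes": (
            "Was it signed by both parties?",
            {
                "no": "Advice: obtain signatures before relying on the terms.",
                "yes": "Advice: the document is likely enforceable on its face.",
            },
        ),
    },
)

def consult(tree, answers):
    """Walk the tree using a dict of {question: answer}."""
    while isinstance(tree, tuple):
        question, branches = tree
        tree = branches[answers[question]]
    return tree

print(consult(KNOWLEDge := KNOWLEDGE, {
    "Is the contract in writing?": "yes",
    "Was it signed by both parties?": "no",
}))
```

The brittleness is visible even at this scale: the system can only answer questions the expert anticipated, and every edge case means another hand-written branch.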

But thankfully, those other missing pieces of the ecosystem are now falling into place. With massive computation, bandwidth and memory available at extremely low cost these days, those barriers have fallen. Which has led to the evolution of Neural Networks from a theoretical, heavily criticised approach into something altogether more respected and valuable.

Welcome to self-learning algorithms – algorithms that (in this case) teach themselves how to play Go better – but without asking a Go expert.

Neural Networks aren’t new in any way. They started as a mathematical theory of the brain but made little progress for 40 years. With the barriers gone, however, we’re now seeing neural networks layered on top of each other. And AI is improving significantly not because the algorithms themselves are getting better, but because we can now push ever-greater volumes of data into models that use this data to build a better picture of what the answer should be.
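To make "billions of tiny adjustments" a little less abstract, here is a toy sketch of the core idea: a single artificial neuron nudging its weights down the error gradient until its answers improve. This is only the update rule in miniature – networks like AlphaGo’s stack millions of such units – and the task (learning logical AND) is chosen purely for brevity:

```python
import math
import random

random.seed(0)

# Training data for logical AND: inputs and the desired output.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [random.uniform(-1, 1) for _ in range(2)]  # random starting weights
b = 0.0

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))  # sigmoid activation

def loss():
    return sum((predict(x) - y) ** 2 for x, y in data)

before = loss()
for _ in range(2000):                      # many tiny adjustments...
    for x, y in data:
        p = predict(x)
        grad = 2 * (p - y) * p * (1 - p)   # gradient of squared error
        w[0] -= 0.5 * grad * x[0]          # nudge each weight slightly
        w[1] -= 0.5 * grad * x[1]
        b -= 0.5 * grad
after = loss()

print(f"loss before: {before:.3f}, after: {after:.3f}")
```

No one "programs" the final weights; they emerge from thousands of small corrections. Scale that up by many layers and many orders of magnitude of data, and you get the un-debuggable but effective systems described below.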

Learning By Intuition & Iteration

Instead of trying to capture and codify all existing knowledge, deep learning techniques are using data to create better results. It’s an approach that is scary to some people because it’s inherently un-debuggable. If you get the wrong result, you can’t simply check out each entry in a decision tree and fix the one that’s wrong.

But it’s got legs, particularly in the development of self-driving cars. We don’t need to paint roads with special paint or maintain a huge global database of all roads and cars. Instead, self-driving cars will use a collection of these machine learning techniques and algorithms to make the best guesses about how to drive each and every day.

Learn, iterate and improve. Scary? It shouldn’t be – because that’s exactly what we do as humans.

It’s a huge and fascinating field but the AlphaGo victory feels like an important bridge has been crossed, an inflection point when popular awareness coincided with a genuine step forward in the possibilities that the technology affords.

And of course, Google’s ultimate goal has never been simply to get better at winning games – unless you define a game as any challenge that is extremely difficult to beat. If so, then bring on the games: disease analysis, climate change modelling, the list is endless. We might not expect these contests to be streamed live online. But as they increasingly become games that we have no option but to win, I’m pretty certain the interest will be there.

AI and The Legal Profession

Working in the legal profession for well over a decade gave me a pretty good insight into the mindset of others in the industry – or, more specifically, their attitude towards change and innovation. What follows is necessarily a generalisation to some extent but I believe the observation remains no less valid because of that.

I vividly remember giving a talk as a trainee back in 1999 on why the growth of Napster’s file-sharing service was so important. Most of my colleagues at the time looked on with barely-disguised expressions of confusion and boredom. It wasn’t the only time during my 13+ years in the industry that my attitude towards adopting and experimenting with change made me feel very different from those around me.

I was always a fan of Richard Susskind’s work over that time. This is a man who predicted in his 1996 book “The Future of Law” that email would become the predominant form of communication between lawyers and their clients – provoking a response from the Law Society of England and Wales to the effect that Susskind shouldn’t be allowed to speak in public because he clearly didn’t understand how the industry functioned or the rules surrounding client confidentiality. He has continued to push the industry, kicking and screaming, into the modern era ever since.

I recently watched a talk he gave over the summer, setting out a 50-year view on the impact of AI on the law.

Susskind starts by setting out the four stages of resistance that he inevitably sees from members of the legal profession when faced with technological progress:

  1. This is worthless nonsense.
  2. This is an interesting but perverse point of view.
  3. This is true but quite unimportant.
  4. I have always said so.

And it was this first stage that reminded me of that Napster talk all those years ago. It’s reminiscent of Gandhi’s quote that gets bandied around frequently in Bitcoin circles when someone brings up the usual adoption hurdles:

“First they ignore you, then they laugh at you, then they fight you, then you win”

Susskind’s history is interesting because he actually built a so-called Expert System with a leading expert lawyer back at the end of the 1980s – essentially transferring a human’s knowledge and expertise into a computer system for others to use. No easy task to take “a dense web of barely intelligible interrelated rules” and turn it into 5 1/4 inch floppy disks. But the end result was a system that would ask you a series of questions before giving you an answer.

Then on 6 August 1991, the web happened. But still the law firms didn’t cotton on to the fact that the world was changing. And of course, why would they? When your business is built on an hourly billing model, what possible use could you have for an Expert System that reduces a process that usually takes 10 hours down to 10 minutes?

But the signs are now undeniable and change is inevitable. As he points out, a key paper from 2011 showed that, for initial document reviews, intelligent search systems can now outperform junior lawyers and paralegals. And remember – that’s the worst that the technology is ever going to be.

I’ve always been drawn to Susskind’s simple argument, which goes along these lines: following Moore’s Law, the average desktop computer in 2020 will have more processing power than the human brain. And in 2050, the average desktop machine will have more processing power than the whole of humanity put together. So it might just be time for the legal profession to accept that change is coming. It just cannot be that the internet, computer science, natural language processing, speech recognition, big data, intelligent inference, machine learning, speech synthesis and so much more are transforming every single corner of society, and yet somehow this effect will not extend to the legal profession – which is, after all, one of the most information- and document-intensive professions in the world!
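Whatever you make of Susskind’s specific predictions, the compounding behind them is simple arithmetic. A two-year doubling period is one common reading of Moore’s Law, used here purely to illustrate the shape of the curve:

```python
# Illustrating the compounding behind the argument, not its specific
# claims: if capacity doubles every two years, growth over a few
# decades is staggering.

def growth(years, doubling_period=2):
    """Multiplier after `years`, assuming capacity doubles each period."""
    return 2 ** (years / doubling_period)

print(f"Over 30 years: x{growth(30):,.0f}")   # prints "Over 30 years: x32,768"
print(f"Over 60 years: x{growth(60):,.0f}")   # 2^30, just over a billion
```

Exponential curves look flat right up until they don’t – which is exactly why industries that benchmark against last year’s technology keep being surprised.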

His conclusion is that by the 2020s we’ll have legal IT systems that are not modelled on brains (i.e. we’ll move away from modelling AI on human intelligence alone), fuelled by brute-force computing, utilising speech recognition, real-time machine translation, natural language processing, an ability to discern otherwise hidden legal risks through the analysis of big data, perfect search, and a mixture of deductive, inductive, analogical and lateral inference.

Face-to-face legal consultations will become the exception rather than the rule and “communities of legal experience” will develop – networks within which ordinary people who have consulted lawyers or solved problems themselves will share their experiences with others who want to access that knowledge.

And yet, the majority of lawyers are still in a state of denial. Most believe that the current state of the industry represents little more than a temporary blip before things return to normal, with an economy similar to the one that existed before 2007. Whilst some of the more successful firms are looking at the disaggregation of legal work (using paralegals, offshoring, on-shoring and so on), the real disruption will come over the next decade, when technologies will be able to do the work that we originally thought could only be done by “intelligent human beings”.

Of course, it’s very easy to criticise from the outside. Which is why innovation often comes from elsewhere. And, with technology, hindsight is always 20/20. But disruption is a certainty. In a world in which Google’s stated aim has always been “to organise the world’s information and make it universally accessible and useful”, the information that provides the foundation of value upon which the profession is built is gradually being made free. I can’t wait to see where we get to once AI really starts to kick in.

It’s not hard to imagine a demand for having IBM’s Watson as an app on your mobile dishing out legal advice whilst it also saves your life, is it?