Where Do We Go From Here?

The recent win by Google’s AlphaGo computer program in a five-game Go match against Lee Sedol, the world’s top player for over a decade, made headlines around the world.

And once you look past some of the more superficial tabloid predictions of imminent robot enslavement, you’ll find a number of intelligent and fascinating accounts detailing exactly why the event represents something of a technology landmark.

It’s worth digging into Google’s blog post for the background, because this was not just another case of a computer learning how to win a board game. Nor was it simply a resumption of the contest between man and machine following our previous defeats in chess (Kasparov against Deep Blue) and Jeopardy! (against Watson).

Complex Choices

Instead, the choice of game here is significant. Go is an ancient game with more possible legal board positions than there are atoms in the universe. In fact, we only managed to calculate that number in 2016, some 2,500 years after the game was invented. Why is this important? Because it means that a computer cannot possibly find the best options simply by brute-forcing combinations. Building a system that indexes every possible move and then relies on the computer to look up the best one each time is simply not feasible.
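To get a feel for the scale involved, here’s a quick back-of-the-envelope sketch in Python. The figures of roughly 250 legal moves per turn and roughly 150 moves per game are commonly cited approximations, not numbers from this post:

```python
# Rough illustration of why brute-forcing Go is hopeless.
# Approximations: ~250 legal moves per turn, games lasting ~150 moves.
branching_factor = 250
game_length = 150

game_tree_size = branching_factor ** game_length  # ~10^359 possible games
atoms_in_universe = 10 ** 80                      # a common estimate

print(f"Game tree size is roughly 10^{len(str(game_tree_size)) - 1}")
print(game_tree_size > atoms_in_universe)  # True - by hundreds of orders of magnitude
```

Even if every atom in the universe were a computer checking a game per nanosecond, exhaustive search wouldn’t make a dent.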

Instead, a successful Go player needs to use something that we can best understand as intuition. A human has to be able to act on no more than a feeling that one move is better than another – something it was generally accepted that computers couldn’t do.

Turns out general opinion was wrong.

Self-Taught

By ‘simply’ learning from 30 million moves played by human experts, the program showed that it could predict the move a human would make 57% of the time. But this would only go so far. To win, the AlphaGo algorithm needed to learn new strategies – by itself.

And it’s here that the outcome was stunning. During the games (live streamed online to massive audiences), the computer made certain moves that made no sense to Go experts. And yet (for the most part) they worked. As one commentator mentioned, this was, at some level, an alien intelligence learning to play the game by itself. And as another put it:

“…as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.”

When it comes to AI, it’s particularly important to rein in the hyperbole. Playing Go in a way that’s at times unrecognisable to humans is hardly Skynet. But it’s fascinating to think that the program reached a level of expertise that surpassed the best human player in a way that no one really fully understands. You can’t point to where it’s better, because the program teaches itself to improve incrementally as a consequence of billions of tiny adjustments made automatically.

Neural Networks: Patience Pays Off

The success of computer over man came from a combination of different, but complementary, forms of AI – not least of which were Neural Networks. After reading a little about the godfather of Deep Learning, Geoff Hinton, and listening to another excellent podcast from Andreessen Horowitz, it turns out that the Neural Network approach at the heart of AlphaGo was an A.I. method that fellow scientists ridiculed as a failure for a number of years, particularly in the 1980’s.

It turns out that the concept was just too far ahead of its time. As Chris Dixon points out in ‘What’s Next In Computing?‘, every significant new technology has a gestation period. But that often doesn’t sit easily when the hype cycle is pointing towards success being just around the corner. And when the bubble bursts, the impact of the delays on the progress of innovation is usually negative.

Nowhere has that been seen so clearly as within the field of Artificial Intelligence. Indeed, the promise has exceeded the reality so often that the industry has its own phrase for it – AI Winters – periods in which both funding and interest fall off a cliff. It turns out that some complex things are, well, complex (as well as highly dependent on other pieces of the ecosystem falling into place). In the UK, the Lighthill Report of 1973 criticised the utter failure of AI to achieve its grandiose objectives, leading to university funding being slashed and work being restricted to a few key centres (including my home city, Edinburgh).

Expert Systems: Data Triumphs

Thankfully, the work did continue with a few believers such as Hinton. And whilst the evolution of AI research is far beyond the scope of this blog post, it’s interesting to see how things developed. At one stage, Expert Systems were seen as the future (check out this talk by Richard Susskind for how this applied in the context of legal systems).

To simplify, this is a method by which you find a highly knowledgeable human in a specific field, ask them as many questions as possible, compile the answers into a decision tree and then hope that the computer can generate a similar result to that expert when you ask it a question. The only problem is that it turns out this doesn’t really work too well in practice.
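A toy sketch of the idea, with an entirely invented set of rules – no real legal system reduces to anything this simple:

```python
# A miniature 'Expert System': expert answers hard-coded as a decision tree.
# The questions and conclusions below are invented purely for illustration.

def toy_legal_expert(answers):
    """Walk a fixed decision tree of yes/no answers to reach a conclusion."""
    if answers["contract_in_writing"]:
        if answers["signed_by_both_parties"]:
            return "Likely enforceable - review the specific clauses."
        return "Signature missing - seek further advice."
    return "Oral contract - enforceability depends on jurisdiction."

print(toy_legal_expert({"contract_in_writing": True,
                        "signed_by_both_parties": False}))
# Signature missing - seek further advice.
```

The brittleness is obvious even at this scale: every branch has to be anticipated and encoded in advance, which is exactly why the approach struggled in practice.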

But thankfully, those other missing pieces of the ecosystem are now falling into place. With massive computation, bandwidth and memory available at extremely low cost these days, those barriers have now fallen. That has led to the evolution of Neural Networks from a theoretical, heavily criticised approach into something altogether more respected and valuable.

Welcome to self-learning algorithms – algorithms that (in this case) teach themselves how to play Go better – but without asking a Go expert.

Neural Networks aren’t new in any way. They started as a mathematical theory of the brain but didn’t make much progress for 40 years. With the barriers gone, though, we’re now seeing neural networks being stacked on top of each other. And AI is improving significantly not because the algorithms themselves are getting better, but because we’re now able to push ever greater volumes of data into models that can in turn use this data to build a better model of what the answer should be.
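That stacking is really just function composition: each layer transforms the output of the one before it. A minimal NumPy sketch – the layer sizes and random weights here are arbitrary, chosen only to show the structure, and nothing in it is AlphaGo’s actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    """One network layer: a linear transform followed by a ReLU nonlinearity."""
    return np.maximum(0.0, inputs @ weights + biases)

# A 'deep' network is just layers stacked: each one feeds the next.
x = rng.normal(size=(1, 81))                      # e.g. a flattened toy 9x9 board
w1, b1 = rng.normal(size=(81, 32)), np.zeros(32)
w2, b2 = rng.normal(size=(32, 16)), np.zeros(16)
w3, b3 = rng.normal(size=(16, 81)), np.zeros(81)

h1 = layer(x, w1, b1)
h2 = layer(h1, w2, b2)
scores = h2 @ w3 + b3                             # one raw score per board point
probs = np.exp(scores - scores.max())             # numerically stable softmax...
probs /= probs.sum()                              # ...giving a probability per move

print(probs.shape)  # (1, 81)
```

Training then consists of nudging those weights, over and over, so the output probabilities favour good moves – the billions of tiny automatic adjustments described above.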

Learning By Intuition & Iteration

Instead of trying to capture and codify all existing knowledge, deep learning techniques are using data to create better results. It’s an approach that is scary to some people because it’s inherently un-debuggable. If you get the wrong result, you can’t simply check out each entry in a decision tree and fix the one that’s wrong.

But it’s got legs, particularly in the development of self-driving cars. So we don’t need to paint roads with special paint and maintain a huge global database of all roads and cars. Instead self-driving cars are going to use a collection of these machine learning techniques and algorithms in order to make the best guesses about how to drive each and every day.

Learn, iterate and improve. Scary? It shouldn’t be – because that’s exactly what we do as humans.

It’s a huge and fascinating field but the AlphaGo victory feels like an important bridge has been crossed, an inflection point when popular awareness coincided with a genuine step forward in the possibilities that the technology affords.

And of course, Google’s ultimate goal has never been to simply be better at winning games. Unless you define a game as being a challenge that is extremely difficult to beat. If so, then bring on the games – disease analysis, climate change modelling, the list is endless. When it comes to these contests, we might not expect them to be streamed live online. But as they increasingly become games that we have no option but to win, I’m pretty certain that the interest will be there.

Looking Far Off Into A Sci-Fi Future

I’ve written before about the positive effects that science fiction as a genre can have on the advancement of technology. By thinking far enough into the future, explaining the details of how mankind will overcome current technological hurdles becomes far less important for most writers than thinking about the knock-on effects that these changes will have on the humans that inhabit that society (however that evolves).

I read a great post today by Tiago Forte (‘What I Learned About The Future By Reading 100 Science Fiction Books‘) that focuses precisely on this point. Here are a few takeaways:-

  • If you want to move the species forwards, you’re not going to find inspiration simply by reading the same material online as everyone else.
  • Mankind will inevitably evolve in ways that cause divisions once we are forced to colonise places beyond the Earth, as individuals are exposed to vastly different external stimuli depending on their location (gravity, radiation, the gene pool within a distant settlement, etc.) for periods measured in years, not days.
  • Once we start to travel immense distances, time will radically change everything – those who embark on an epic journey are unlikely to be the first to arrive at their destination, because technology will advance during their absence, meaning that others who leave later will get there sooner.
  • In the same way, technology will become outdated even more quickly – one great example is the 4-megapixel camera on the Rosetta spacecraft, launched back in 2004, which is now lower quality than the average mobile phone camera today.
  • When we reach the singularity, the chances are that the ‘wide’ AI that develops will not be preoccupied solely with solving the problems that we believe need to be solved today. Instead, it will likely start to seek answers to issues that we can neither comprehend nor currently have the language to describe.

Sure, these aren’t issues that are knocking on the door demanding a solution today. But progress is inevitable and it’ll be fascinating to see how things pan out (no doubt virtually, given the fact that we’re talking about a time well after ours when blogs such as this are no more than a random historical artefact).

 

A.I. and Summoning The Demon

I’m sorry Dave. I can’t do that.

“If a superior alien civilisation sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here – we’ll leave the lights on”? Probably not – but this is more or less what is happening with AI.” (Stephen Hawking)

Recent months have seen the debate around the future of artificial intelligence reach the mainstream press. No longer simply the preserve of sci-fi authors, there now appears to be a more concerted effort to publicly co-ordinate research streams and inter-disciplinary expertise to see whether mankind really is, as Elon Musk suggests, “summoning the demon“.

Yesterday an open letter was published by the Future of Life Institute to publicise a pledge by top experts around the globe to coordinate progress in the field of A.I. for the benefit of mankind. It was published in association with a research document which highlights a few areas that researchers should be focusing on in order to achieve this goal over time. In short, the argument is that work should be directed towards maximising the societal benefit of A.I. instead of focusing on simply increasing the capabilities of A.I. alone.

As the letter says: “Our AI systems must do what we want them to do”.

FLI’s Research Areas

As small improvements are made, the potential monetary value of each step forward in this area could be significant, prompting growing investment into research in turn. But that’s hardly surprising – given the fact that the entirety of the civilisation that we know today is the product of human intelligence, the potential benefits of A.I. (which after all is simply intelligence magnified at scale) could easily be far beyond our current imagination. Research should be directed to ensure that there is significant societal benefit derived from the powers that are harnessed.

When it comes to short-term areas of interest, the FLI suggest the following:-

  • Assess the impact of A.I. on employment and the potential disruption that it might bring.
  • Consider how to deal with the displaced employees who may no longer have a job with the advent of such technology.
  • Develop frameworks for the exploration of legal and ethical questions by:
    • involving the expertise of computer scientists, legal experts, policy experts and ethicists;
    • drafting a set of machine ethics (presumably on a global, as opposed to national, basis);
    • considering the impact of autonomous weapons and what having “meaningful human control” actually represents;
    • assessing the extent to which AI will breach privacy and be able to snoop on our data and general activities.
  • Ensure that all short-term A.I. research focuses on:
    • verification – build confidence that machines will act in certain ways, particularly in safety critical situations;
    • validity – a robot that hoovers up dirt before simply dumping it and repeating may be efficient but is of little benefit to mankind;
    • security – as A.I. becomes more prevalent, it’s increasingly likely that it will be targeted in cyber-attacks;
    • control – determine what level of human control is necessary or simply efficient (e.g. when sharing tasks with machines).

Over the longer-term, the suggestion is that research should look into such issues in light of the potential that A.I. has to evolve such that a system starts to actually learn from its experiences. This introduces the concept of an intelligence explosion – in effect, the way that a system can modify, extend or improve itself, possibly many times in succession. In many ways, it is this idea that represents the demon that Musk, Hawking and others warn us about in such stark terms. As Stanford’s 100 Year Study Of Artificial Intelligence points out:

“We could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes….such powerful systems would threaten humanity”

Don’t Worry (Yet)

It’s worth noting that there are also plenty of voices who maintain that the singularity is not that near. There is a huge difference between so-called ‘narrow’ AI (intelligence that enables certain specific tasks to be carried out, such as autonomous driving), which tends to have fairly short timelines to success, and the much harder ‘wide’ or general AI (machines with intelligence that replicates human intelligence).

As Ben Medlock of SwiftKey points out in a recent article, the field of artificial intelligence is characterised by over-optimism when it comes to timescales, because we always underestimate the complexity of both the natural world and the mind. As he points out, to surpass human intelligence, a truly intelligent machine must surely inhabit a body of sorts, just like a human, so that it can experience and interact with the world in meaningful ways from which it can learn. This concept of “embodied cognition” remains a long way off.

On one hand, it’s clear that narrow AI is becoming more common. We’re all seeing the evidence on our smartphones and in technologies that are starting to appear around us. No doubt this will be accelerated by a combination of the internet of things, the final move to the cloud and the evolution of powerful algorithms that will naturally improve in accuracy with the related upsurge in available data being collected. But the self-optimising artificial intelligence that evolves at a pace far beyond mankind’s biological constraints remains an issue firmly to be dealt with in the future.

The key thing now, however, is that the debate is no longer confined to academics alone. And in light of the vast potential that such technologies bring towards solving some of the biggest issues we face, including everything from the eradication of disease to the prevention of global warming, whilst also representing what might very well turn out to be the greatest existential threat mankind has ever faced, there’s no doubt that that’s a good thing.

The Year Ahead

I’ve been thinking about how to evolve this blog over the next twelve months. It originally started out a few years back as a place for me to post more considered, long-form articles that went into some depth on topics that fascinated me.

That worked for a while – in the sense that I enjoyed writing and received complimentary feedback from various quarters. But as someone far more productive than me (in making quotable statements, if nothing else) once wrote, “The perfect is the enemy of the good“. The reality is that whilst the more detailed and comprehensive articles may attract decent levels of interest online, the extra effort required to polish up that final 20% slows down the frequency of posts.

But doesn’t quality beat quantity? Usually – but with one caveat. Regular practice inevitably improves quality and writing should be no different. At the same time, I’ve always found that the process of moving knowledge from head to screen using your own words is the most powerful learning technique there is.

So I took the decision to relax a little more in each post and to write more frequently – every day – and more broadly about the topics that interest me. The logic’s pretty simple. Even if I turn out to be the only one out there that enjoys these topics, at least I’ll enjoy looking back over some of the posts in the years to come and seeing just how far some of the thinking has evolved.

This year, I’ll keep that approach going. I’m hoping to redesign the site in the near future to make things cleaner and easier to read, particularly on mobile platforms. And as for the topics themselves, I don’t think they’ll come as any surprise to those who have visited before.

The key theme will inevitably be Bitcoin and associated block chain technologies. But on top of that, the other areas under the spotlight this year will likely be data security, surveillance, drone technology, 3D printing, the internet of things, networks, startups, VC investment, AI, the coming singularity and last, but by no means least, how traditional forms of creativity can not only survive but thrive in a digital world.

I’m guessing that’ll keep me pretty busy for the next 364 days.

AI and The Legal Profession

Working in the legal profession for well over a decade gave me a pretty good insight into the mindset of others in the industry – or, more specifically, their attitude towards change and innovation. What follows is necessarily a generalisation to some extent but I believe the observation remains no less valid because of that.

I vividly remember giving a talk as a trainee back in 1999 on why the growth of Napster’s file-sharing service was so important. Most of my colleagues at the time looked on with barely-disguised expressions of confusion and boredom. It wasn’t the first time over the course of 13+ years that my attitude towards adopting and experimenting with change made me feel very different from others within the industry.

I was always a fan of Richard Susskind’s work over that time. This is a man who predicted in his 1996 book “The Future of Law” that email would become the predominant form of communication between lawyers and their clients – provoking a response from the Law Society of England and Wales along the lines that Susskind shouldn’t be allowed to speak in public because he clearly didn’t understand how the industry functioned or the rules surrounding client confidentiality – and he has continued to push the industry kicking and screaming into the modern era ever since.

I just watched a talk that he gave over the summer in which he set out a 50-year view of AI and the law.

Susskind starts by setting out the four stages of resistance that he inevitably sees from members of the legal profession when faced with technological progress:-

  1. This is worthless nonsense
  2. This is an interesting but perverse point of view
  3. This is true but quite unimportant
  4. I have always said so.

And it was this first stage that reminded me of that Napster talk all those years ago. It’s reminiscent of Gandhi’s quote that gets bandied around frequently in Bitcoin circles when someone brings up the usual adoption hurdles:

“First they ignore you, then they laugh at you, then they fight you, then you win”

Susskind’s history is interesting because he actually built a so-called Expert System with a leading expert lawyer back at the end of the 1980’s – basically transferring a human’s knowledge and expertise into a computer system for others to use. It was no easy task to take “a dense web of barely intelligible interrelated rules” and turn it into 5 1/4 inch floppy disks. But the end result was a system that would ask you a series of questions before giving you an answer.

Then on 6 August 1991, the web happened. But still the law firms didn’t cotton on to the fact that the world was changing. And of course, why would they? When your business is built on an hourly billing model, what possible use could you have for an Expert System that reduces a process that usually takes 10 hours down to 10 minutes?

But of course the signs are now undeniable and change is inevitable. As he points out, a key paper from 2011 shows that, in terms of initial document review, intelligent search systems can now outperform junior lawyers and paralegals. And remember – that’s the worst that the technology is ever going to be.

I’ve always been drawn to Susskind’s simple argument, which goes along these lines: following Moore’s Law, the average desktop computer in 2020 will have more processing power than the human brain, and by 2050 the average desktop machine will have more processing power than the whole of humanity put together. So it might just be time for the legal profession to accept that change is coming. It just cannot be that the internet, computer science, natural language processing, speech recognition, big data, intelligent inference, machine learning, speech synthesis and so much more are transforming every single corner of society and yet somehow this effect will not extend to the legal profession – which is, after all, one of the most information- and document-intensive professions in the world!

His conclusion is that by the 2020’s, we’ll have legal IT systems that are not modelled on brains (i.e. we’ll move away from modelling AI based on human intelligence alone), fuelled by brute force computing, utilising speech recognition, with real-time machine language translation, natural language processing, an ability to discern otherwise hidden legal risks through the analysis of big data, perfect search and a mixture of deductive, inductive, analogical and lateral inference.

Face-to-face legal consultations will become the exception rather than the rule and “communities of legal experience” will develop – networks within which ordinary people who have consulted lawyers or solved problems themselves will share their experiences with others who want to access that knowledge.

And yet, the majority of lawyers are still in a state of denial. Most believe that the current state of the industry represents little more than a temporary blip in the standard state of affairs before things return to normal, with an economy similar to the one that existed before 2007. Yet whilst some more successful ‘firms’ are looking at the disaggregation of legal work (using paralegals, offshoring, on-shoring etc.), the real disruption will come over the next decade when technologies will be able to do the work that we originally thought could only be done by “intelligent human beings”.

Of course, it’s very easy to criticise from the outside. Which is why innovation often comes from elsewhere. And, with technology, hindsight is always 20/20. But disruption is a certainty. In a world in which Google’s stated aim has always been “to organise the world’s information and make it universally accessible and useful”, the information that provides the foundation of value upon which the profession is built is gradually being made free. I can’t wait to see where we get to once AI really starts to kick in.

It’s not hard to imagine a demand for having IBM’s Watson as an app on your mobile dishing out legal advice whilst it also saves your life, is it?

 

Global Trends in 2015

Like many, I’m a sucker for those “take a look at what’s on the horizon” type of posts. December’s always the month when these things start to appear with a vengeance, and the presentation from Global Trends is as good a place to start as any.

Here’s a few areas that stood out to me. I’ve partly picked these out of general interest but mainly it’s because I think that they reinforce a few of my own thoughts about key themes that every entrepreneur who’s looking to build a business in a growth area should at least be aware of.

1. Images hold more power than text

Not news in itself but it does reinforce this renewed interest in visual communications technologies (Oculus Rift, anyone?). In some cases, the technologies are developing to help us interact with the real world but in others, virtual reality will continue to gain traction.

2. The data security risk will worsen

Breaches will continue to happen more regularly and with increasingly serious repercussions. The problem will worsen significantly as the rapid advance of the Internet of Things eats up all manner of additional personal data. The battles for privacy, freedom and security will continue online, with the defence being strengthened I have no doubt by the hard work being carried out by my friends at MaidSafe.

At the same time, efforts made to access and index the Deep Web will continue apace. Those parts of the world wide web that are not indexed by conventional search engines contain a vast quantity of data that is several orders of magnitude larger than the surface web and remains untapped. A potential treasure trove of information appeals to many around the world for different reasons. But the potential to use such ‘big data’ to develop new solutions to existing problems is likely to attract people looking for new opportunities.

3. The battle for increasingly scarce resources heats up

As we collectively leave our own individual trails of digital exhaust to be exploited by marketers and identity thieves, the demand (and, on the flip side, reward) for reducing waste by combining recycling, green energy and new business models will drive new ideas within what is being called the Circular Economy.

4. The rise of impact capitalism

More people will seek to invest their money in a way that delivers a measurable environmental and social impact, and not simply a financial return. Crowdfunding and peer-to-peer lending models will continue to grow apace in order to tackle significant challenges that the market has previously ignored when left to its own devices.

My favourite example here has to be Watsi, the Y Combinator company that’s created a healthcare crowdfunding platform that lets you directly fund medical care for individuals in developing countries.

5. Working life continues to evolve

More people are responding to high unemployment levels with an entrepreneurial mindset, increasingly out of necessity. Self-employment and ‘on-demand’ working patterns will become commonplace with larger businesses becoming more open in general to collaboration with individuals.

6. Healthcare changes

More progress will be made this year in personalised medicine designed for the individual, in the tracking of health indicators using data captured by wearable devices to increase diagnostic efficiency, and in the continued integration of robots (albeit without the AI of science fiction novels at this stage) into care and treatment cycles. Out of many fascinating areas, early-stage experiments in bioprinting will continue – using 3D printing technology to print body parts.

7. The rise of the city state

With the continued growth in population putting ever-increasing pressure on food and electricity supplies, cities are facing rapidly expanding populations – and therefore growing in importance as they seek to protect and provide for the demands of their residents. With more than 50% of the world’s population living in cities (there are currently 28 megacities with over 10 million people), the mayors who rule these cities will find their roles increase in significance. At the very least, they’ll have a significant role to play in tackling global issues, as cities currently produce 70% of all global harmful CO2 emissions in a world that needs such levels reduced drastically within the next 20 years.

8. Intelligence

And of course the debate will continue over whether robots are going to continue to close the gap on humans – and the existential threat to humanity that this development in AI may cause. Many of the predictions in the deck are likely to come to fruition well beyond 2015, but the meshing of machine intelligence with neuroscience, genomics and biotechnology provides no end of ideas for researchers and computer scientists to continue their collaborations.