Where Do We Go From Here?

The recent win by Google’s AlphaGo computer program in a five-game Go match against Lee Sedol, the world’s top player for over a decade, made headlines around the world.

And once you look past some of the more superficial tabloid predictions of imminent robot enslavement, you’ll find a number of intelligent and fascinating accounts detailing exactly why the event represents something of a technology landmark.

It’s worth digging into Google’s blog post for the background. Because this was not just another case of a computer learning how to win a board game. Nor was it a resumption of competition between man and machine following our previous defeats at chess (Kasparov against Deep Blue) and at Jeopardy! (against IBM’s Watson).

Complex Choices

Instead, the choice of game here is significant. Go is an ancient game with more possible legal board positions than there are atoms in the observable universe. In fact, it took until 2016 – some 2,500 years after the game’s invention – for that number to be calculated at all. Why is this important? Because it means that a computer cannot possibly find the best options by brute-force guessing of combinations. Building a system to index all possible moves in the game and then relying on the computer to look up the best move each time is simply not possible.
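A rough back-of-the-envelope calculation shows the scale of the problem. The branching factor and game length below are commonly cited approximations rather than exact figures:

```python
# Rough scale of the Go search space versus a brute-force approach.
# The figures are commonly cited approximations, not exact values.
import math

branching_factor = 250   # typical number of legal moves available per turn
game_length = 150        # typical number of moves in a professional game

# A naive game tree holds roughly 250^150 positions.
tree_size_log10 = game_length * math.log10(branching_factor)
print(f"Naive game tree: ~10^{tree_size_log10:.0f} positions")  # ~10^359

# For comparison, the observable universe holds ~10^80 atoms.
print("Atoms in the observable universe: ~10^80")
```

Even enumerating a billion positions per second would barely scratch the surface, which is why lookup tables and exhaustive search are non-starters for Go.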

Instead, a successful Go player needs to use something that we can best understand as intuition. A human has to be able to act on no more than a feeling that one move is better than another – and it was generally accepted that this was something computers couldn’t do.

Turns out general opinion was wrong.

Self-Taught

By ‘simply’ learning from 30 million moves played by human experts, the program showed that it could predict which move a human would make 57% of the time. But this would only go so far. To win, the AlphaGo algorithm needed to learn new strategies – by itself.
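AlphaGo’s actual policy network was a deep convolutional network trained at enormous scale; the sketch below is only a toy illustration of the underlying idea – supervised move prediction – with all sizes and names chosen purely for illustration:

```python
# Toy sketch of supervised move prediction (not AlphaGo's real network).
# A board position is encoded as a feature vector; the model outputs a
# probability for every possible move and is nudged towards the move
# that the human expert actually played.
import numpy as np

rng = np.random.default_rng(0)
N_FEATURES, N_MOVES = 361, 361  # 19x19 board; one move per point (illustrative)
W = rng.normal(scale=0.01, size=(N_FEATURES, N_MOVES))  # model weights

def predict(position):
    """Softmax distribution over all moves for one board position."""
    logits = position @ W
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def train_step(position, expert_move, lr=0.1):
    """One cross-entropy gradient step towards the expert's choice."""
    global W
    probs = predict(position)
    grad = np.outer(position, probs)   # d(loss)/dW for all moves...
    grad[:, expert_move] -= position   # ...minus the one-hot expert move
    W -= lr * grad

# A real training loop would stream (position, expert_move) pairs here;
# AlphaGo learned from roughly 30 million of them before self-play.
position = rng.normal(size=N_FEATURES)
train_step(position, expert_move=42)
print(predict(position)[42])  # probability of move 42 has been nudged up
```

The crucial second phase – self-play reinforcement learning, where the network improves by playing against versions of itself – is what took it beyond its human teachers.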

And it’s here that the outcome was stunning. During the games (live streamed online to massive audiences), the computer played certain moves that made no sense to Go experts. And yet (for the most part) they worked. As one commentator observed, this was, at some level, an alien intelligence learning to play the game by itself. And as another put it:

“…as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.”

When it comes to AI, it’s particularly important to rein in the hyperbole. Playing Go in a way that’s at times unrecognisable to humans is hardly Skynet. But it’s fascinating to think that the program reached a level of expertise that surpassed the best human player in a way that no one really fully understands. You can’t point to exactly where it’s better, because the program teaches itself to improve incrementally through billions of tiny adjustments made automatically.

Neural Networks: Patience Pays Off

The success of computer over man came from a combination of different, but complementary, forms of AI – not least of which were Neural Networks. After reading a little about the godfather of Deep Learning, Geoff Hinton, and listening to another excellent podcast from Andreessen Horowitz, it turns out that the approach of using Neural Networks (at the heart of AlphaGo) was an A.I. method ridiculed as a failure by fellow scientists for a number of years, particularly in the 1980s.

It turns out that the concept was just too far ahead of its time. As Chris Dixon points out in ‘What’s Next In Computing?‘, every significant new technology has a gestation period. But that often doesn’t sit easily when the hype cycle is pointing towards success being just around the corner. And as the bubble bursts, the impact of the delays on the progress of innovation is usually negative.

Nowhere has that been seen more clearly than in the field of Artificial Intelligence. Indeed, the promise has exceeded the reality so often that the phenomenon has its own name in the industry – the AI Winter – a period when both funding and interest fall off a cliff. Turns out that some complex things are, well, complex (as well as highly dependent on other pieces of the ecosystem falling into place). In the UK, the Lighthill Report of 1974 criticised the utter failure of AI to achieve its grandiose objectives, leading to university funding being slashed and work being restricted to a few key centres (including my home city, Edinburgh).

Expert Systems: Data Triumphs

Thankfully, the work did continue with a few believers such as Hinton. And whilst the evolution of AI research is far beyond the scope of this blog post, it’s interesting to see how things developed. At one stage, Expert Systems were seen as the future (check out this talk by Richard Susskind for how this applied in the context of legal systems).

To simplify, this is a method by which you find a highly knowledgeable human in a specific field, ask them as many questions as possible, compile the answers into a decision tree and then hope that the computer can generate a similar result to that expert when you ask it a question. The only problem? It turns out that this doesn’t work too well in practice.
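As a caricature of the approach, an expert system boils down to hand-written rules walked as a decision tree. The example below is a deliberately simplified, entirely hypothetical rule base in the legal vein Susskind discusses:

```python
# A caricature of an expert system: rules elicited from a human expert,
# hand-coded as a decision tree. (Hypothetical toy example.)

def contract_advice(contract_signed, payment_made, goods_delivered):
    """Tiny rule base for a hypothetical contract dispute."""
    if not contract_signed:
        return "No contract exists - no claim."
    if payment_made and not goods_delivered:
        return "Likely breach by the seller - claim refund or delivery."
    if goods_delivered and not payment_made:
        return "Likely breach by the buyer - seller may claim payment."
    return "No obvious breach on these facts."

print(contract_advice(contract_signed=True, payment_made=True,
                      goods_delivered=False))
```

Every answer can be traced back to an explicit rule, which is exactly what makes this approach debuggable – and, as we’ll see, exactly what the self-learning alternative gives up.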

But thankfully, those other missing pieces of the ecosystem are now falling into place. With massive computation, bandwidth and memory available at extremely low cost, those barriers have now fallen. Which has led to the evolution of Neural Networks from a theoretical, heavily criticised approach into something altogether more respected and valuable.

Welcome to self-learning algorithms – algorithms that (in this case) teach themselves how to play Go better – but without asking a Go expert.

Neural Networks aren’t new. They started as a mathematical theory of the brain but didn’t make much progress for 40 years. With the barriers gone, however, we’re now seeing neural networks being piled on top of each other. And AI is improving significantly not because the algorithms themselves are getting better, but because we’re now able to push ever-greater volumes of data into models, which in turn use that data to build a better picture of what the answer should be.
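To illustrate that point – more data, not cleverer algorithms, driving the gains – here’s a minimal sketch on a synthetic task. Nothing here is specific to AlphaGo; the same simple model just gets steadily better as it sees more examples:

```python
# Minimal illustration: the identical model improves as the volume of
# training data grows. Synthetic task, purely for demonstration.
import numpy as np

rng = np.random.default_rng(1)
true_w = rng.normal(size=20)              # hidden "ground truth" rule

def make_data(n):
    X = rng.normal(size=(n, 20))
    y = (X @ true_w > 0).astype(float)    # labels produced by the rule
    return X, y

X_test, y_test = make_data(5000)

for n in (10, 100, 1000, 10000):
    X, y = make_data(n)
    # Least-squares fit used as a crude linear classifier
    w, *_ = np.linalg.lstsq(X, y * 2 - 1, rcond=None)
    accuracy = ((X_test @ w > 0) == y_test).mean()
    print(f"{n:>6} training examples -> test accuracy {accuracy:.2f}")
```

Run it and the test accuracy climbs with every extra order of magnitude of data, even though the algorithm never changes.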

Learning By Intuition & Iteration

Instead of trying to capture and codify all existing knowledge, deep learning techniques use data to create better results. It’s an approach that scares some people because it’s inherently un-debuggable. If you get the wrong result, you can’t simply inspect each branch of a decision tree and fix the one that’s wrong.

But it’s got legs, particularly in the development of self-driving cars. With this approach, we don’t need to paint roads with special paint or maintain a huge global database of all roads and cars. Instead, self-driving cars are going to use a collection of these machine learning techniques and algorithms to make the best guesses about how to drive, each and every day.

Learn, iterate and improve. Scary? It shouldn’t be – because that’s exactly what we do as humans.

It’s a huge and fascinating field but the AlphaGo victory feels like an important bridge has been crossed, an inflection point when popular awareness coincided with a genuine step forward in the possibilities that the technology affords.

And of course, Google’s ultimate goal has never been to simply be better at winning games. Unless you define a game as being a challenge that is extremely difficult to beat. If so, then bring on the games – disease analysis, climate change modelling, the list is endless. When it comes to these contests, we might not expect them to be streamed live online. But as they increasingly become games that we have no option but to win, I’m pretty certain that the interest will be there.

A.I. and Summoning The Demon

I’m sorry Dave. I can’t do that.

“If a superior alien civilisation sent us a message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here – we’ll leave the lights on”? Probably not – but this is more or less what is happening with AI.” (Stephen Hawking)

Recent months have seen the debate around the future of artificial intelligence start to reach the mainstream press. No longer the preserve of sci-fi authors alone, there now appears to be a more concerted effort to publicly co-ordinate research streams and inter-disciplinary expertise to see whether mankind really is, as Elon Musk suggests, “summoning the demon“.

Yesterday an open letter was published by the Future of Life Institute to publicise a pledge by top experts around the globe to coordinate progress in the field of A.I. for the benefit of mankind. It was published in association with a research document which highlights a few areas that researchers should focus on in order to achieve this goal over time. In short, the argument is that work should be directed towards maximising the societal benefit of A.I. instead of focusing simply on increasing the capabilities of A.I. alone.

As the letter says: “Our AI systems must do what we want them to do.”

FLI’s Research Areas

As small improvements are made, the potential monetary value of each step forward in this area could be significant, prompting growing investment into research in turn. But that’s hardly surprising – given the fact that the entirety of the civilisation that we know today is the product of human intelligence, the potential benefits of A.I. (which after all is simply intelligence magnified at scale) could easily be far beyond our current imagination. Research should be directed to ensure that there is significant societal benefit derived from the powers that are harnessed.

When it comes to short-term areas of interest, the FLI suggest the following:-

  • Assess the impact of A.I. on employment and the potential disruption that it might bring.
  • Consider how to deal with the displaced employees who may no longer have a job with the advent of such technology.
  • Develop frameworks for the exploration of legal and ethical questions by:
    • involving the expertise of computer scientists, legal experts, policy experts and ethicists;
    • drafting a set of machine ethics (presumably on a global, as opposed to national, basis);
    • considering the impact of autonomous weapons and what having “meaningful human control” actually represents;
    • assessing the extent to which AI will breach privacy and be able to snoop on our data and general activities.
  • Ensure that all short-term A.I. research focuses on:
    • verification – build confidence that machines will act in certain ways, particularly in safety critical situations;
    • validity – a robot that hoovers up dirt before simply dumping it and repeating may be efficient but is of little benefit to mankind;
    • security – as A.I. becomes more prevalent, it’s increasingly likely that it will be targeted in cyber-attacks;
    • control – determine what level of human control is necessary or simply efficient (e.g. when sharing tasks with machines).

Over the longer-term, the suggestion is that research should look into such issues in light of the potential that A.I. has to evolve such that a system starts to actually learn from its experiences. This introduces the concept of an intelligence explosion – in effect, the way that a system can modify, extend or improve itself, possibly many times in succession. In many ways, it is this idea that represents the demon that Musk, Hawking and others warn us about in such stark terms. As Stanford’s 100 Year Study Of Artificial Intelligence points out:

“We could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes… such powerful systems would threaten humanity.”

Don’t Worry (Yet)

It’s worth noting that there are also plenty of voices who maintain that the singularity is not that near. There is a huge difference between so-called ‘narrow’ AI (intelligence that enables certain specific tasks to be carried out, such as driving autonomous cars), which tends to have fairly short timelines to success, and the much harder ‘wide’ or general AI (machines with intelligence that replicates human intelligence).

As Ben Medlock of SwiftKey points out in a recent article, the field of artificial intelligence is characterised by over-optimism when it comes to timescales because we always underestimate the complexity of both the natural world and the mind. As he points out, to surpass human intelligence, a truly intelligent machine must surely inhabit a body of sorts, just like a human, so that it can experience and interact with the world in meaningful ways from which it can learn. This concept of “embodied cognition” remains a long way off.

On one hand, it’s clear that narrow AI is becoming more common. We’re all seeing the evidence on our smartphones and in technologies that are starting to appear around us. No doubt this will be accelerated by a combination of the internet of things, the final move to the cloud and the evolution of powerful algorithms whose accuracy will naturally improve with the related upsurge in available data. But the self-optimising artificial intelligence that evolves at a pace far beyond mankind’s biological constraints remains firmly an issue for the future.

The key thing now, however, is that the discussion is no longer confined to academics alone. And given that such technologies hold vast potential for solving some of the biggest issues we face – everything from the eradication of disease to the prevention of global warming – whilst also representing what might very well turn out to be the greatest existential threat mankind has ever faced, there’s no doubt that that’s a good thing.

Robots: Seeking Jobs Or World Domination?

Dark The Robot is a very friendly chap

I’ve always been interested in robots. I don’t know who’s to blame – R2D2, Twiki or the Gunslinger. I have a soft spot for books like ‘Robopocalypse‘ and actively seek out discussions about how long we have to wait until we hit the technological singularity. So when I was asked by the Beltane Public Engagement Network (thanks Sarah!) to go along to one of their events titled ‘Robots Rise‘, it’s fair to say there wasn’t too much arm-twisting going on.

Robotic Historic

At first, the idea of discussing robots in the lavish and dated wooden and mirrored surroundings of The Famous Spiegeltent felt slightly surreal. Now I realise it was ideal. Why? The Spiegeltent was built in 1920 – exactly the same year that the word ‘robot’ was used for the first time ever in a play called ‘R.U.R. (Rossum’s Universal Robots)‘ by Karel Capek.

The word ‘robot’ derives from the Czech robota – the forced labour that serfs were required to carry out on their masters’ lands. Of course, in fiction, robots often appear as metaphors for human problems, whether slavery or racism. But as they become increasingly visible in society, will we end up teaching them such concepts as cruelty – or will they be capable of learning such flaws themselves? In essence, how human do we actually want our artificial intelligence to be?

The session was led by Subramanian Ramamoorthy, Lecturer in Robotics at Edinburgh University, who gave his expert views on how far robots are already intertwined with our daily lives and how much further that’s likely to develop. It was a fascinating chat; here’s what I took away from the session:

When Will Robots Take Over The World?

Let’s cut to the chase and start with the million dollar question.

The answer? Not any time soon. I get the sense that it’s a question that researchers get asked way too often. There are various reasons why robots actually taking over the world is unlikely, but high up the list is the simple fact that there’s no logical reason why they’d want to. Even humans don’t seek world domination (well, most of us). And even if they did change their minds, their batteries wouldn’t last (honestly).

Interestingly, many people seem to assume that robots will develop some malevolent intention as they evolve – perhaps a view that’s been heavily influenced by Hollywood (e.g. Skynet). Yet the reality is that most developments in robotics currently focus on assistive, rather than disruptive, technologies. The most obvious future uses of robotics involve helping humans to carry out manual and repetitive tasks (cleaning cups, say) or remote exploration.

Still, despite the experts’ reassurances, I find it hard to ignore the march of progress under Moore’s Law and this animated graphic which shows just how long it will be until computers have the same power as the human brain. Makes you think, doesn’t it?

Will Robots Take Our Jobs?

At one level, it’s already happening. If you’ve ever ordered a book from Amazon, it’s likely to have been physically selected for you in the warehouses by a robot. Amazon didn’t pay $775 million to buy Kiva Systems Inc. last year for nothing. Returning to their charging stations automatically, these 24-hour workers won’t be asking for a coffee break any time soon.

Of course, there is always the possibility of unrest if robots displace vast numbers of workers. But in many ways, the more interesting question is how this technology could be applied to complement existing human roles. Consider how we currently search for a missing person, for example. If there’s no trace found, it may be very difficult to justify the cost of a policeman searching a remote location for an extended period of time. But the cost/benefit analysis of asking a robot to carry out the same task may look entirely different.

For example, it’s not hard to see how any army will be able to make use of these (don’t worry, you’re not alone if you start to get a little creeped out by progress here):-

Robots In Space

Robots have been up in orbit for a while. But far from simply replicating fiction, it’s useful to understand why they’re actually required. Whilst an astronaut’s job might appear glamorous, the reality is that much of the daily routine is just that – a mass of repetitive boring jobs. I suspect few astronauts dreamed that the spectrum of tasks that they’d be required to carry out when pushing the boundaries of mankind would involve quite so many requests to empty the toilet on the International Space Station…

Robots are great at the manual tasks. Robonaut 2 is by all accounts doing a great job on the ISS and, what’s more, he’s pretty funny on Twitter too (@AstroRobonaut).

But Why Focus on Humanoid Shapes?

The question was asked why we seem to be focusing on building humanoid robot shapes rather than purpose-built structures. It’s clear that having a cute wee fella like Dark the Robot (pictured above) speak to you from a stage brings a favourable response and gets people talking. It’s almost PR for the field, enticing people into learning more about the subject.

There seem to be different lines of thought on this topic and the question of whether we are focusing on developing humanoid robots too readily is a source of real debate within the robotic community that’s likely to continue.

Misconceptions?

Those of us who live outwith the rarefied circles of AI/robotic research, but within ready reach of great films, appear to hold an overly optimistic assumption about the current rate of progress. Developments continue to accumulate, but the evolution of robotic abilities still lags far behind the development of a human child, for example. Progress is being made, but it’s important to remember that in general we’re still only able to teach robots to carry out certain tasks with effort – we might have developed a robot that learned how to fold towels, but it’s still taking 25 minutes per towel.

An Ethical Stramash?

Surprisingly not, for the most part. Despite the advances in robotics so far being only incremental, researchers apparently get asked about ethics frequently – way before any such issues could actually arise. The reality is that, except in very specific areas (such as medical technology), researchers are still a long way from having to tackle particularly taxing ethical problems.

So What Does The Future Hold?

Good question. Everything. And yet, many important limitations remain.

Seeing a robot in the flesh (so to speak) on Friday, I couldn’t help but be struck again by precisely how complex these machines really are. OK, so we might laugh at their basic footballing skills, but the work that’s taken place to get even to that stage is incredible.

The reason that any robot ever moves comes down to a complex combination of software and hardware – every joint contains a motor that is activated by software working in combination with the robot’s other senses, such as vision (identifying colour and shape), touch sensors, accelerometers and sonar, amongst others. Putting all of that together so that it works as intended is no small task, to say the least.
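To make that concrete, each individual joint is typically driven by a feedback loop along these lines. This is a minimal sketch of a proportional controller; read_joint_angle and set_motor_torque are invented stand-ins for whatever the real hardware API provides:

```python
# Minimal sketch of a proportional (P) feedback loop for one robot joint.
# read_joint_angle() and set_motor_torque() are hypothetical stand-ins
# for a real hardware interface.
import time

TARGET_ANGLE = 45.0   # degrees - where we want the joint to end up
KP = 0.8              # proportional gain: how hard to correct errors

def read_joint_angle():
    """Stand-in for a real encoder reading."""
    return read_joint_angle.angle

def set_motor_torque(torque):
    """Stand-in for a real motor command; crudely integrates the motion."""
    read_joint_angle.angle += torque * 0.5

read_joint_angle.angle = 0.0  # joint starts at 0 degrees

for step in range(20):
    error = TARGET_ANGLE - read_joint_angle()   # how far off are we?
    set_motor_torque(KP * error)                # push proportionally harder
    print(f"step {step:2d}: angle = {read_joint_angle():6.2f}")
    time.sleep(0.01)
```

A real robot runs dozens of these loops simultaneously, fused with vision and touch data – which is where the complexity explodes.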

One of the stated goals of the Robot World Cup is to evolve the technology so that a team of robot footballers can actually defeat the human World Cup winners by 2050. Is it likely? I don’t see why not when you take a look at the most recent robot from DARPA.

Or Is The Future Already Here?

If you really think about it, we’re actually pretty far down the line in some ways already. Estimates state that by the end of 2013, there will be one smartphone for every five people in the world. To recycle the often-repeated statement, every single one of those has processing power far in advance of that used by the Apollo moon landing programme (as an aside, I just found out that you can actually build your own working replica NASA Apollo Landing Computer if you’ve got both the inclination and a spare $3,000).

Then consider what Google and the other search engines are accomplishing by indexing the world’s information. Start to tie that data in with what might be possible via wearable technology such as Google Glass and you really start to get a glimpse of the future.

For now, it seems that the field is focused on building fundamentally better robots (physically) whilst improving the existing skills of interaction (via programming advances). We’re still a long way away from developing robots that are self-powered with the ability to repair themselves at will. But whatever the evidence to the contrary, I can’t help but think that this is another area where things are just going to accelerate in the future.

It’s a fascinating topic. I’d love to fast-forward twenty years and revisit this post again. But in the meantime, I’ll leave you with one thought.

Rapidly ageing population of the world – meet ASIMO.

@dugcampbell