“If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here – we’ll leave the lights on’? Probably not – but this is more or less what is happening with AI.” (Stephen Hawking)
Recent months have seen the debate around the future of artificial intelligence reach the mainstream press. No longer the preserve of sci-fi authors alone, there now appears to be a concerted effort to publicly co-ordinate research streams and inter-disciplinary expertise to see whether mankind really is, as Elon Musk suggests, “summoning the demon”.
Yesterday the Future of Life Institute published an open letter publicising a pledge by top experts around the globe to coordinate progress in the field of A.I. for the benefit of mankind. It was published alongside a research document highlighting the areas that researchers should focus on in order to achieve this goal over time. In short, the argument is that work should be directed towards maximising the societal benefit of A.I., rather than simply increasing its capabilities alone.
As the letter says: “Our AI systems must do what we want them to do.”
FLI’s Research Areas
As small improvements are made, the potential monetary value of each step forward could be significant, prompting growing investment into research in turn. That’s hardly surprising: given that the entirety of the civilisation we know today is the product of human intelligence, the potential benefits of A.I. (which is, after all, simply intelligence magnified at scale) could easily be far beyond our current imagination. Research should be directed to ensure that significant societal benefit is derived from the powers that are harnessed.
When it comes to short-term areas of interest, the FLI suggests the following:
- Assess the impact of A.I. on employment and the potential disruption that it might bring.
- Consider how to deal with the displaced employees who may no longer have a job with the advent of such technology.
- Develop frameworks for the exploration of legal and ethical questions by:
- involving the expertise of computer scientists, legal experts, policy experts and ethicists;
- drafting a set of machine ethics (presumably on a global, as opposed to national, basis);
- considering the impact of autonomous weapons and what having “meaningful human control” actually represents;
- assessing the extent to which AI will breach privacy and be able to snoop on our data and general activities.
- Ensure that all short-term A.I. research focuses on:
- verification – build confidence that machines will act in certain ways, particularly in safety-critical situations;
- validity – a robot that hoovers up dirt only to dump it out and repeat the cycle may be behaving efficiently by its own measure, but is of little benefit to mankind;
- security – as A.I. becomes more prevalent, it’s increasingly likely to be targeted in cyber-attacks;
- control – determine what level of human control is necessary or simply efficient (e.g. when sharing tasks with machines).
Over the longer-term, the suggestion is that research should look into such issues in light of the potential that A.I. has to evolve such that a system starts to actually learn from its experiences. This introduces the concept of an intelligence explosion – in effect, the way that a system can modify, extend or improve itself, possibly many times in succession. In many ways, it is this idea that represents the demon that Musk, Hawking and others warn us about in such stark terms. As Stanford’s 100 Year Study Of Artificial Intelligence points out:
“We could one day lose control of AI systems via the rise of superintelligences that do not act in accordance with human wishes … such powerful systems would threaten humanity.”
Don’t Worry (Yet)
It’s worth noting that there are also plenty of voices who maintain that the singularity is not that near. There is a huge difference between so-called ‘narrow’ AI (intelligence that enables certain specific tasks to be carried out, such as driving an autonomous car), which tends to have fairly short timelines to success, and the much harder ‘wide’ or general AI (machines whose intelligence replicates human intelligence).
As Ben Medlock of SwiftKey points out in a recent article, the field of artificial intelligence is characterised by over-optimism when it comes to timescales, because we always underestimate the complexity of both the natural world and the mind. He argues that to surpass human intelligence, a truly intelligent machine must surely inhabit a body of sorts, just as a human does, so that it can experience and interact with the world in meaningful ways from which it can learn. This concept of “embodied cognition” remains a long way off.
On one hand, it’s clear that narrow AI is becoming more common. We’re all seeing the evidence on our smartphones and in technologies that are starting to appear around us. No doubt this will be accelerated by a combination of the internet of things, the final move to the cloud and the evolution of powerful algorithms whose accuracy will naturally improve with the related upsurge in available data being collected. But the self-optimising artificial intelligence that evolves at a pace far beyond mankind’s biological constraints remains, for now, firmly an issue for the future.
The key point now, however, is that the debate is no longer confined to academics alone. And given the vast potential that such technologies bring towards solving some of the biggest issues we face – everything from the eradication of disease to the prevention of global warming – whilst also representing what might very well turn out to be the greatest existential threat mankind has ever faced, there’s no doubt that that’s a good thing.