Dug Campbell

Digital Ethics in a Connected World

The last couple of months have seen high-profile figures in technology painting various ominous visions of Artificial Intelligence bringing about the end of humanity. It’s a fascinating subject but I’ve avoided writing anything on it so far, as I feel it’s a nuanced topic that deserves a little more detail than simply dealing with it in one of my daily blog posts.

Interestingly though, the topic rears its head once again in this video that I watched today by Gerd Leonhard from TEDx Brussels. Gerd’s a renowned futurist whom I’ve been following since he released “The Future Of Music”.

The questions that he poses around ethics and morality when it comes to technological advancement are certainly thought-provoking. As the global network of connectivity tightens, technology continues to develop at warp speed. Yet as humans, we are simply incapable of developing at such a pace. No matter how many people we befriend on social networks, our social tribe (our hardwired connectivity, if you like) still maxes out at around 150 people each.

But like it or not, we can’t stop the exponential development in technology that comes with each passing day, as underlying themes start to take hold and amplify each other’s effects – think of the internet, social media, mobile, cloud computing, big data, 3D printing, renewable energy, the Internet of Things, cognitive systems, robotics, the Smart Grid, the connected car, Smart Homes, Next Generation Education, Smart Cities, Next Generation Automation, Connected Healthcare, the Sharing Economy, Autonomous Vehicles, the Maker Economy, the Energy Internet and the Logistics Internet.

Now, one question is becoming increasingly important. Every single advance relies on data to progress – and, increasingly, this data will come from you. Do you genuinely believe that you have the necessary power to decide whether or not this information can be used by Google, Uber or anyone else? If the News Feed that Facebook shows to its 1.2 billion users worldwide daily is indeed being controlled by only 15 people (plus a hugely complex algorithm), shouldn’t you at least have the right to investigate the ethics of these individuals? It’s a question that’s already been asked earlier this year and resulted in Facebook apologising for the way in which it carried out psychological experiments on 700,000 users’ News Feeds back in 2012.

As Gerd rightly points out, privacy holds the key to this discussion. But as another 3 billion people come online in the near future, what will the end result be if we have no moral compass which we can use to guide us when it comes to looking at these issues?

The bottom line is this: technology does not have ethics. It is simply a platform that is in a continual state of development. Almost everything that can be digitised or automated will be. This means that the potential for technology to improve our lives is incredible. But if, in short, technology is all about improved efficiency, what on earth is going to happen to the very human characteristics that are not like those of a machine?

Much of the beauty of human creation can be attributed to mistakes, serendipity and chance. Even the simple existence of inefficiency in the form of playfulness has resulted in much of what we know to be human (take art and music as an example). So when asked (if indeed we are given the choice) how far we wish to integrate computing into our bodies in the years to come, do we just passively agree? Or to put it another way, in search of that added efficiency, are we simply going to make our heads more like machines so that machines can more easily read our heads?

The ethics surrounding this digital transformation are fascinating and slightly scary. It’s an easy call to make to let nanobots get injected into your bloodstream to destroy the cholesterol that might otherwise kill you. I suspect that we’ll generally be willing to accept such actions if the direct result of such ‘cooperation’ is a greater life expectancy. But where do you draw that line? Is choosing an implant in your brain that can instantly access the Knowledge Graph to give you a headstart in your career prospects just as easy a decision to make? How about if those implants become standard and an expected requirement before you can apply for a particular job?

Interestingly, the World Future Society has laid down three simple rules when it comes to thinking about AI:-

  1. Humans should not ‘become’ technology.
  2. Humans should not be subject to dominant control by AI entities.
  3. Humans should not fabricate new creatures by augmenting humans or animals.

I agree with Gerd’s point that we can now safely say that the power of our technology has already surpassed the scope of our ethics. We may be heading into a world of abundance, as Peter Diamandis argues, but we are yet to discover the most ethical way of developing such increased efficiencies – in terms of fully representing the truth that results, being fair to all parties and acting for the benefit of all.

I’m a huge proponent of technology. Always have been and always will be. But I think we can do far worse than to take on board the punchline of the cartoon used in Gerd’s talk:-

“In the end, remember – we weren’t downloaded – we were born”