Look, No Hands!

One story that caught my eye today was the Tesla that managed to predict a car crash ahead and react before a human could have responded. 

The Autopilot technology, rolled out overnight to all Tesla cars by way of a software update, includes a radar processing capability – in effect, the ability for your car to see beyond the vehicle directly in front of you.

There have been a few stories kicking around about the value of this newfound driving superpower, but today’s comes with a video of the incident, which demonstrates precisely how powerful this technology could be in helping to avoid accidents.


I drove a fair distance today, the last hour or so of which was in the worst fog I’ve seen for many years. Whilst nothing’s going to be perfect, I would 100% have preferred to have been driving a Tesla (obviously…). But what’s really interesting here is the potential for so-called ‘fleet learning’ – each car uploading data from its daily experiences to a central database, with this improved collective knowledge then being recycled for ongoing use by the same vehicles.
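
For the curious, here’s a rough sketch of what that fleet-learning loop amounts to in practice. Everything here – the class names, the sample event, the stand-in ‘model’ – is my own invention for illustration, not anything Tesla has published:

```python
# A minimal, purely illustrative sketch of a 'fleet learning' cycle.

class Car:
    def drive_and_log(self):
        """Collect the day's sensor events (radar returns, hard-braking incidents)."""
        return [{"event": "hard_brake", "radar_lead_gap_m": 4.2}]  # hypothetical sample


class Fleet:
    def __init__(self, cars):
        self.cars = cars
        self.shared_model = {}  # stands in for the central, collectively-trained model

    def daily_cycle(self):
        # 1. Each car uploads what it saw today to the central database.
        all_events = [e for car in self.cars for e in car.drive_and_log()]
        # 2. The central system learns from the pooled experience.
        self.shared_model["training_examples"] = len(all_events)
        # 3. The improved model is pushed back out to every car overnight.
        for car in self.cars:
            car.model = self.shared_model


fleet = Fleet([Car() for _ in range(3)])
fleet.daily_cycle()
```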

A Safety Skynet, anyone?

Where Do We Go From Here?

The recent win by Google’s AlphaGo computer program in a five-game Go match against Lee Sedol, the world’s top player for over a decade, made headlines around the world.

And once you look past some of the more superficial tabloid predictions of imminent robot enslavement, you’ll find a number of intelligent and fascinating accounts detailing exactly why the event represents something of a technology landmark.

It’s worth digging into Google’s blog post for the background, because this was not just another case of a computer learning how to win a board game. Nor was it simply a resumption of the competition between man and machine that followed our previous defeats in chess (Kasparov against Deep Blue) and in Jeopardy! (against IBM’s Watson).

Complex Choices

Instead, the choice of game here is significant. Go is an ancient game with more possible legal board positions than there are atoms in the universe. In fact, that number was only calculated in 2016, some 2,500 years after the game first appeared. Why is this important? Because it means that a computer cannot possibly find the best options simply by brute-force guessing combinations. Building a system to index every possible move in the game and then relying on the computer to look up the best move each time is simply not possible.
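
To put some rough numbers on that – the legal-position count is the 2016 result of roughly 2.08 × 10^170, and the atom count is the usual estimate of around 10^80 for the observable universe – a quick back-of-the-envelope calculation shows just how hopeless brute force is:

```python
# Back-of-the-envelope numbers behind the 'no brute force' argument.

legal_go_positions = 2.08e170   # Tromp's 2016 count of legal Go positions
atoms_in_universe = 1e80        # common estimate for the observable universe

print(legal_go_positions / atoms_in_universe)   # ~2e90 positions per atom

# Even an absurdly fast machine checking 1e18 positions per second for the
# entire age of the universe (~4.3e17 seconds) covers a vanishing fraction:
checked = 1e18 * 4.3e17
print(checked / legal_go_positions)             # ~2e-135 of the search space
```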

Instead, a successful Go player needs to use something that we can best understand as intuition. A human has to be able to act on no more than a feeling that one move is better than another – and it was generally accepted that this was something computers couldn’t do.

Turns out general opinion was wrong.

Self-Taught

By ‘simply’ learning from 30 million moves played by human experts, the program showed that it could predict which move a human would make 57% of the time. But that would only go so far. To win, the AlphaGo algorithm needed to learn new strategies – by itself.
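
As a very loose illustration of those two stages – learn from recorded expert games, then keep improving through self-play – here’s a toy sketch. The function names and the tiny ‘model’ are mine, and bear no resemblance to the real AlphaGo internals:

```python
# Toy sketch: imitate experts first, then improve by playing against yourself.
import random


def train_on_expert_games(expert_positions):
    """Supervised stage: learn to predict the move a human expert would play."""
    model = {}
    for position, expert_move in expert_positions:
        model[position] = expert_move              # toy stand-in for a policy network
    return model


def self_play_improve(model, rounds=1000):
    """Reinforcement stage: try new moves and keep the ones that win more often."""
    for _ in range(rounds):
        position = random.choice(list(model)) if model else "empty_board"
        candidate = "new_move"                      # explore a move no expert played
        if random.random() < 0.5:                   # toy stand-in for 'did it win?'
            model[position] = candidate
    return model


model = train_on_expert_games([("corner_opening", "approach_move")])
model = self_play_improve(model)
```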

And it’s here that the outcome was stunning. During the games (live streamed online to massive audiences), the computer made certain moves that made no sense to Go experts. And yet (for the most part) they worked. As one commentator mentioned, this was, at some level, an alien intelligence learning to play the game by itself. And as another put it:

“..as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.”

When it comes to AI, it’s particularly important to rein in the hyperbole. Playing Go in a way that’s at times unrecognisable to humans is hardly Skynet. But it’s fascinating to think that the program reached a level of expertise that surpassed the best human player in a way that no one really fully understands. You can’t point to exactly where it’s better, because the program teaches itself to improve incrementally as a consequence of billions of tiny adjustments made automatically.

Neural Networks: Patience Pays Off

The success of machine over man came from a combination of different, but complementary, forms of AI – not least of which were Neural Networks. After reading a little about the godfather of Deep Learning, Geoff Hinton, and listening to another excellent podcast from Andreessen Horowitz, it turns out that the approach of using Neural Networks (at the heart of AlphaGo) was an AI method that was ridiculed as a failure for a number of years by fellow scientists, particularly in the 1980s.

It turns out that the concept was simply too far ahead of its time. As Chris Dixon points out in ‘What’s Next In Computing?‘, every significant new technology has a gestation period. But that often doesn’t sit easily when the hype cycle is pointing towards success being just around the corner. And as the bubble bursts, the impact of the delays on the progress of innovation is usually negative.

Nowhere has that been seen so clearly as within the field of Artificial Intelligence. Indeed, the promise has exceeded the reality so often that the phenomenon has its own term in the industry – the AI Winter – a period in which both funding and interest fall off a cliff. It turns out that some complex things are, well, complex (as well as highly dependent on other pieces of the ecosystem falling into place). In the UK, the Lighthill Report in 1974 criticised the utter failure of AI to achieve its grandiose objectives, leading to university funding being slashed and work being restricted to a few key centres (including my home city, Edinburgh).

Expert Systems: Data Triumphs

Thankfully, the work did continue thanks to a few believers such as Hinton. And whilst the evolution of AI research is far beyond the scope of this blog post, it’s interesting to see how things developed. At one stage, Expert Systems were seen as the future (check out this talk by Richard Susskind for how this applied in the context of legal systems).

To simplify, this is a method by which you find a highly knowledgeable human in a specific field, ask them as many questions as possible, compile the answers into a decision tree and then hope that the computer is able to generate a similar result to that expert when you ask it a question. The only problem is that, in practice, this doesn’t really work too well.
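
To make that concrete, here’s a toy sketch of the expert-system idea: the expert’s answers get hand-compiled into a decision tree, and the computer simply walks it. The ‘legal’ rules below are invented purely for illustration:

```python
# Toy expert system: a hand-built decision tree plus a loop that walks it.

decision_tree = {
    "question": "Is the contract in writing?",
    "yes": {
        "question": "Was it signed by both parties?",
        "yes": "Likely enforceable -- refer to a solicitor to confirm.",
        "no": "Signature missing -- probably not enforceable as drafted.",
    },
    "no": "Oral contracts are harder to prove -- gather supporting evidence.",
}


def ask(node, answers):
    """Walk the hand-built tree using the user's yes/no answers."""
    while isinstance(node, dict):
        node = node["yes"] if answers.pop(0) else node["no"]
    return node


print(ask(decision_tree, [True, False]))
```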

But thankfully, those other missing pieces of the ecosystem are now falling into place. With massive computation, bandwidth and memory available at extremely low cost these days, those barriers have now fallen. Which has led to the evolution of Neural Networks from a theoretical, heavily criticised approach into something altogether more respected and valuable.

Welcome to self-learning algorithms – algorithms that (in this case) teach themselves how to play Go better – but without asking a Go expert.

Neural Networks aren’t new in any way. They started as a mathematical theory of the brain but didn’t make much progress for 40 years. With the barriers gone, however, we’re now seeing neural networks being piled on top of each other. And AI is improving significantly not because the algorithms themselves are getting better. It’s improving because we’re now able to push increasing volumes of data into these models, which in turn use that data to build a better picture of what the answer should be.
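
If ‘piling neural networks on top of each other’ sounds abstract, this minimal sketch shows the basic shape of it – each layer’s output becomes the next layer’s input. The weights here are random and untrained; it’s the structure that matters, not the numbers:

```python
# Five stacked layers: each layer's output feeds the next layer's input.
import numpy as np

rng = np.random.default_rng(0)
layers = [rng.standard_normal((8, 8)) for _ in range(5)]   # five stacked layers


def forward(x, layers):
    for weights in layers:
        x = np.maximum(0, weights @ x)   # simple ReLU non-linearity between layers
    return x


print(forward(rng.standard_normal(8), layers))
```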

Learning By Intuition & Iteration

Instead of trying to capture and codify all existing knowledge, deep learning techniques are using data to create better results. It’s an approach that is scary to some people because it’s inherently un-debuggable. If you get the wrong result, you can’t simply check out each entry in a decision tree and fix the one that’s wrong.

But it’s got legs, particularly in the development of self-driving cars. It means we don’t need to paint roads with special markings or maintain a huge global database of all roads and cars. Instead, self-driving cars are going to use a collection of these machine learning techniques and algorithms to make the best possible guesses about how to drive, each and every day.

Learn, iterate and improve. Scary? It shouldn’t be – because that’s exactly what we do as humans.

It’s a huge and fascinating field but the AlphaGo victory feels like an important bridge has been crossed, an inflection point when popular awareness coincided with a genuine step forward in the possibilities that the technology affords.

And of course, Google’s ultimate goal has never been to simply be better at winning games. Unless you define a game as being a challenge that is extremely difficult to beat. If so, then bring on the games – disease analysis, climate change modelling, the list is endless. When it comes to these contests, we might not expect them to be streamed live online. But as they increasingly become games that we have no option but to win, I’m pretty certain that the interest will be there.

The Listening Television

I finally took the plunge and bought a new TV today. Using a mass of Nectar points accumulated from food shopping over the past decade or so, I managed to get a good deal on a new low-end model. To be honest, I rarely watch the TV. Any viewing that I do have time for inevitably tends to be on the laptop these days. But the difference between the two models, old and new, is pretty significant. If nothing else, I have no idea how to get the old TV down the stairs – it’s that heavy.

But the whole experience of buying a new piece of tech – as exciting as that invariably is for anyone with geek tendencies – was tempered by the recent story about Samsung’s Smart TVs in the back of my mind. These once-simple appliances have become completely different propositions these days, as Michael Price pointed out in Salon late last year (‘I’m terrified of my new TV: Why I’m scared to turn this thing on – and you’d be too‘).

We’re suddenly in a world where so-called Smart TVs record our activities and choices, retaining the power to pass that information on to marketers and other third parties to do with as they wish. The decision facing many consumers is in many ways an unfair one: disable many of your all-singing, all-dancing new TV’s features, or accept one further encroachment on your privacy.

As you might remember, the worrying issue with the Samsung Smart TV was its voice recognition. Or, more accurately, because of the voice recognition features it employs, the Privacy Policy for the TV shows that anything you say in the vicinity of the television may in fact be recorded and transmitted to a third party for analysis. When that third party is a marketing company, it’s perhaps little more than irritating. But there’s no guarantee that the data exchange stops there.

One of the biggest issues is the fact that Samsung is sending the customer’s voice searches and data in an unencrypted format. Think of the potential for hackers and snoopers to literally listen in.

Yeah, it was a lot simpler the first time I bought my TV. Even if it weighs about the same as my fridge and is almost as attractive…

Skyscanner Becomes A $1 Billion Business

It’s Friday so it’s as good a time as any to have some good news.

Scotland now has its first $1 billion internet business in the form of Skyscanner. I remember being at a startup drinks event in Edinburgh around five or so years ago and hearing Gareth talking about his vision to achieve exactly this goal. And now it’s reality. It’s been incredible (and hugely inspiring) to see the growth of the business in the intervening years and a real testament to both the leadership and the vision within the business.

OK, this comes with the obvious caveat that I worked with Skyscanner for a while, so I may be slightly biased. But the reality is that what Gareth – and so many others – have achieved collectively is absolutely phenomenal. I would be dishing out the same praise whether I knew the team or not. But having seen the inside of the business only reinforces my belief that there is something very special going on there, far beyond the public perception of it being ‘simply’ a travel aggregation site (I’m not the only one to have seen this by any means). I look forward to watching them continue to grow.

I’ve written before about why Scotland’s such a great place to build a technology company. Edinburgh in particular leads the way, with a rich ecosystem of startups, Codebase (the UK’s largest tech incubator), the next edition of Silicon Milkroundabout landing next weekend, the Startedin group… the list goes on.

Skyscanner might be Scotland’s first $1 billion internet business, it’s true. But now it’s time to build a few more.


A Decade Of Blogging

I’ve always been fascinated by blogging, certainly since it really broke into the consciousness of the general public around a decade ago. Regardless of the quality of the content, the ability to actively share content directly with an audience, no matter how niche it might be, immediately hit me as being incredibly powerful.

No gatekeepers.

I’ve learned a huge amount over the last decade or so simply from reading blogs. I remember once asking work colleagues how many blogs they read regularly. Or even irregularly. The answer, it transpired, was that not a single person read any. That still amazes me. Needless to say, I also understood that I was in the wrong job.

Of course the landscape has shifted hugely over the last decade. Some bloggers, named and anonymous alike, have moved on, but many stalwarts remain (Fred Wilson, for example, started blogging back in September 2003). Far larger numbers of people are now producing content which, thanks to technology that’s freely available, has at least the potential to reach a global audience. And of course the emergence of micro-blogging platforms such as Twitter really helped to tap into that pent-up desire so many had to share something (currently 288 million monthly active users generating 500 million Tweets per day).

However, a huge factor in the growth of blogging was the emergence of WordPress. Whilst investigating why WordPress withdrew support for Bitcoin payments this week, I came across this article from October 2004 about the early days, when Matt Mullenweg was developing WordPress – the juggernaut that is currently the most popular blogging system on the web, powering more than 60 million websites.

The philosophy here is really interesting and validates the open source model. Almost everything on WordPress.com is free. They charge for upgrades (whether it’s spam filters or custom domains) but the core proposition is – and always will be – free. If you’re worried about giving something away for free, I suggest you go and have a chat with Matt. I doubt giving stuff away has done him much harm over the last decade or so.

Going back to the article, there seem to be some parallels between WordPress in 2004 and the state of Bitcoin in 2015. You can sense a seismic change coming. It’s impossible to say yet when it will arrive or who the ultimate winners will be. But it’s certain that there will be winners. As Scott Maxwell mentioned in the Q&A after the Bitcoin talk we gave at Dundee Tech Meetup yesterday, there are probably five or six places lying vacant at the moment, just waiting for people to carve themselves a place in the history books. With every day, we get a little closer to finding out who it’s going to be.

The Evolution of Spending in the Sharing Economy

Change is a constant and it’s clear that the growth in the collaborative economy is going to reshape current spending patterns throughout many economies.

The actual impact is still hard to ascertain. But the evidence is stacking up that there are going to be significant changes in the near future. As Larry Fink pointed out in a recent article, technology can profoundly affect an entire industry even if, initially, it only directly touches a small subsection of it.

Fink uses the example of hydraulic fracturing in oil production to make his point. While demand for oil has continued to rise by around 600,000 barrels a day over the past year, the actual supply – in part due to new technologies such as fracking (putting to one side for this article the immense damage that fracking causes) – has increased by around 2 million barrels a day.

His argument is that (as damaging as fracking is) the technology has affected the overall price per barrel despite the fact that the majority of barrels are not produced using this method.

So when it comes to the sharing economy, what sort of changes are we likely to see as a result of the stellar growth of businesses such as Uber and Airbnb? For most younger people in Western economies, there are two common goals when it comes to acquiring significant items of property: the car and the home. Not surprisingly, both are in the crosshairs of these growing businesses.

So whilst both assets are fundamentally different (one being an investment, the other a depreciating asset), the question still remains. If significant sums of money are less likely in the future to be tied up by these big capital outlays at the start of young people’s lives, where will they be directed instead? Any ideas?

Respond to the Scottish Identity Database Consultation Today

tl;dr Go here, download the Respondent Information Form and submit it before Wednesday 25th February, saying that the proposals require primary legislation and should only be put forward after a full public debate on the issues, given that they would fundamentally restructure the relationship between citizen and state.

It’s rare that I write something on my blog and ask people to act. But tonight is one of those exceptions.

A national identity card?

For many years, the concept of a national identity card has been put forward by various political parties around the UK. Each time, the topic has proved to be political suicide: proposals have proved unpopular and have been consistently rejected by the electorate. Increasingly, as more people interact online, it’s become obvious that the risks of building up such a valuable store of information greatly exceed the potential benefits that any such scheme could deliver.

And yet, despite the general resistance to the concept of an identity scheme across the UK over the years, here in Scotland we face the very real risk that minor legislation proposed to extend the functionality of NHS records will, in effect, achieve exactly the same outcome by creating a national identity database.

I spent this evening at an event organised by the Open Rights Group in Scotland, who have taken on the important role of coordinating attempts to raise awareness of, and resistance to, this legislation being enacted without appropriate levels of debate. The proposals come in the form of secondary legislation, with a consultation period currently running under the innocuous-sounding title of the Consultation on proposed amendments to the National Health Service Central Register (Scotland) Regulations 2006.

Legislation that has an impact way beyond your medical records

Before you go any further, I suggest you read ORG’s detailed response to the Consultation. The crux of the matter is this: if you live in Scotland, the chances are that the NHS already holds a record of the fact that you exist. But the problem is that this new legislation would enable the reference number that uniquely identifies you as an individual to be shared freely with another 100-plus Scottish agencies.

Why is this a big deal? The practical reality of the proposal as drafted is that it would create a Scottish identity database. We face a very real possibility that public bodies could then start to mine such data in order to build their intelligence about you, in pursuit of ends that may directly conflict with your own.

So, to use a simplistic example, seeing your choice of library books used against you when it comes to claiming unemployment benefit (too much fiction, not enough textbooks?) becomes a very real possibility. Or how about the fact that most people who undergo some form of addiction counselling would normally want that information to be restricted rather than being shared widely amongst thousands of employees across different organisations. And it’s not difficult to envisage a situation whereby a victim of domestic violence learns of the increased transparency about her personal details and therefore attempts to remain outside the health system with issues unreported in order to prevent an abusive ex-partner who works for a public body from tracking her down.
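
If it helps to see why a single shared reference number makes all of this so easy, here’s a hedged sketch. The records and the identifier format are entirely made up; the point is simply that once every agency keys its data by the same number, building a combined profile becomes a one-line merge:

```python
# Illustrative only: three agencies keying records by the same persistent identifier.

library = {"REF-12345": {"loans": ["novel", "novel", "novel"]}}
benefits = {"REF-12345": {"claim": "jobseeker's allowance"}}
health = {"REF-12345": {"notes": "addiction counselling"}}


def profile(ref):
    """Trivially merge records from separate agencies via the shared identifier."""
    return {**library.get(ref, {}), **benefits.get(ref, {}), **health.get(ref, {})}


print(profile("REF-12345"))
# With per-service identifiers instead, no such join would be possible without
# an explicit (and auditable) matching step.
```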

The proposed model does, of course, bring with it certain efficiencies. But the reality is that the risks of potential misuse arising from the collection of such information are huge. By creating a comprehensive list of personal identifiers, we create an environment in which the temptation to use such a treasure trove of information for irrelevant or minor purposes will inevitably grow over time.

I’m not going to write more about the privacy debate here. There are plenty of well-rehearsed arguments from people far more eloquent than I am, who have written fantastic pieces over the years detailing the risks of implementing similar systems (I recommend reading Wendy Grossman’s excellent SCRIPT essay on identity cards from 2005). But I did want to point out the following:

The massive risk of centralisation

If there’s one thing I’ve learned from my time spent with decentralised systems around Bitcoin and the blockchain, it’s this: design a system that protects value by putting everything in one centralised location and restricting access, and you inevitably end up with a system that will always – always – act as a red flag to hackers.

The more valuable the data (whether it’s money or personal information), the greater the incentive to attack it once it’s stored in one location. We’re not there yet, but I’m convinced that blockchain technologies will ultimately solve this problem.

So we have a database – now what?

The question here isn’t necessarily whether or not we trust our public bodies to use such collected information for good. The question is whether we trust their defences to be 100% secure from any breaches (either internal or external). To save you the effort, I’ll answer that now. No, we can’t.

Whether we believe the future intentions of governments to be noble or not, the problem is that once such information has been handily compiled into a database, it cannot somehow be decompiled, so it will remain permanently at risk of being accessed by others. If you need an example, consider the fact that centralised security didn’t turn out so well for those world-leading experts in cyber-security, the NSA, did it?

Is the technology up to scratch?

The general consensus is that the technology systems used by the public sector in Scotland lag behind those in use down south. Not a good foundation for the storage of the crown jewels, as it were. If the NSA weren’t able to protect their own confidential data, I’m not convinced that the powers-that-be at Holyrood will somehow be able to deliver a system that’s more successful.

Have certain politicians changed their minds?

ID cards were rejected by many different politicians when the last serious attempt was made to introduce them a few years ago. That includes the SNP, who are now backing this legislation. Back in 2005, the Scottish Government actually published a paper on Identity Management and Privacy Principles (revised in October 2014) which explicitly stated that public bodies must avoid sharing persistent identifiers. Yet that is exactly what is proposed in this model. Have certain politicians forgotten their previous position on this issue? Or are people simply not talking to each other?

Respond to the Consultation

This is in no way a comprehensive post detailing all of the key issues. It is, I hope, a timely one: it’s important that as many people as possible learn about the proposals, and the Consultation itself closes in under a week. Regardless of your views – pro or anti – this is not, by any stretch of the imagination, legislation that should pass through a democratic system without a wider public debate being held. It has the potential to fundamentally redraw the boundaries of citizenship within society and it needs more people to become engaged. Nor is this simply a Scottish debate: it’s inconceivable that such a system, once introduced in this country, would somehow not be adopted south of the border at some point down the line.

Please do respond. You can find the consultation here.

Emojis Matter

Emojis and emoticons seem to have become increasingly widespread over the last few years. I have to admit – I’m not a huge fan myself. But I can 100% see why they’ve become so popular. After all, who within Bitcoin doesn’t like ToTheMoonGuy?

With access to technologies becoming increasingly commonplace (SMS, tweets, Facebook interactions), we’re communicating more frequently but using far fewer words in each exchange. And within these compressed formats, one well-placed emoticon can easily convert a vicious personal attack into nothing more than comical banter between friends.

It’s interesting to watch how society is starting to deal with this evolution in language. Leaving aside the cost implications of technologies that have in some cases been misunderstood (one woman in Scotland racked up an extra £1,000 bill as a result of her emoticon addiction, having failed to realise that each emoticon message was being charged as a picture message by her mobile provider), emoji are now assuming more formal significance.

There are reports of juries being directed to focus on the use of emoticons in written evidence led in court. We saw it happen in the recent Silk Road trial of Ross W. Ulbricht for example. But the difficulty here is that there is no standardised usage yet for the symbols. Usage of emoji can vary between two individuals or within certain communities so it remains a challenge for outsiders to interpret at this stage.

I don’t really have any firm conclusions on this one way or the other, to be honest. But I’m interested to see whether we will ever reach a stage where the meaning behind emoticons (or their descendants) becomes genuinely standardised. Or will their development follow that of the written word or currency, where to date the world has shown itself to contain enough niches to support entirely separate versions? My instinct is that we are a long way off a common language of symbols.

┗(°0°)┛

The Problems of Long-Term Digital Archiving

I imagine that backing up and protecting data is a pretty standard concern for most people these days (note: that doesn’t mean we’re all actually doing something about it). But anyone who keeps collections of family memories on backup hard drives or consumer-friendly cloud services such as iCloud has presumably identified those risks already.

Yet the reality is that the precautions we’re taking today to preserve this data are still unlikely to be sufficient in the longer term. Whilst the earliest paper we’ve discovered (from second-century China) has survived to this day, what are the chances of a stash of your favourite JPEGs surviving for hundreds of years? And if they do survive, where and how will they be indexed?

And even if we do manage to preserve such a collected human history, as Vint Cerf has just pointed out, there’s a very real chance that we might end up storing a vast amount of data with absolutely no idea what that data actually is. Or to put it another way, we might have created a file using Photoshop, but that fact – together with the details of the software itself – is then lost over the passage of time, rendering the data useless in the future.

There’s an interesting proposal to carry out a type of X-ray analysis, whereby a snapshot could be taken of the digital environment in which the file was created (i.e. the software, the computer model, the operating system etc.) in a way that could then be easily checked far into the future. However, the sort of business that carries out such an essential service would have to survive for hundreds of years. That’s not the sort of business we’ve ever seen to date.
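
In case it’s useful, here’s a rough sketch of the kind of ‘environment snapshot’ that proposal describes – a provenance record stored alongside the file so that a future reader knows how the bytes should be interpreted. All of the field names and values below are illustrative assumptions, not any real standard:

```python
# Illustrative provenance record to be stored alongside an archived file.
import hashlib
import json


def provenance_record(path, payload: bytes):
    return {
        "file": path,
        "sha256": hashlib.sha256(payload).hexdigest(),  # ties the record to the exact bytes
        "created_with": {"software": "Photoshop", "version": "CC 2015"},
        "environment": {"os": "OS X 10.10", "hardware": "MacBook Pro (2014)"},
        "format_spec": "JPEG (ISO/IEC 10918-1)",        # points at the format's definition
    }


print(json.dumps(provenance_record("holiday.jpg", b"...image bytes..."), indent=2))
```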

I can’t help but think that there’s a blockchain solution for this in some way.


How Facebook Is The Internet For More People

I’ll save my update on what has been an amazing few days being immersed in building ideas for blockchain businesses with the Chiasma until I can do it justice.

Instead, here’s a fascinating story that shows just how powerful Facebook is becoming in some developing countries. To summarise, those studying survey responses found that millions of Facebook users in Indonesia have no idea that they’re using the internet. Indeed, the majority of respondents in Nigeria, India, Indonesia and Brazil agreed with the statement “Facebook is the internet”.

Of course, Facebook and others have been instrumental in building the foundations for this ubiquity, with Facebook Zero focused on providing access to those with only basic phones and the provision of Facebook-only data plans in India, for example.

But the reality is that the platform is now so important that regulators and public bodies can do nothing other than engage with citizens where they are – namely, on Facebook. And as the gradual exodus from the open web to a closed proprietary platform continues, we move further away from the benefits – and the intention – that powered the original concept of the internet.