The SAFE Network in The Guardian

Great piece in The Guardian newspaper on 1st February all about the SAFE Network and our work at MaidSafe in building a new decentralised, autonomous data and communications network for the world. John Harris really dug into the subject and it’s a well-considered piece.

It’s been fantastic to see the response across the growing SAFE community to this kind of coverage and chat with so many new people over the last week as a result. If this is the first time you’ve come across the SAFE Network and what it represents, I’d urge you to sign up to the Forum (https://safenetforum.org/). Then download the Alpha 2 software at https://maidsafe.net/ to take a look at what the start of a new internet, with privacy embedded by default, is going to look like.

It’s going to be quite a journey.

MaidSafe and the SAFE Network in The Guardian


Joining The Team At MaidSafe

(This is a repost from the MaidSafe blog)

As we hurtle towards the end of 2017, it’s time to take stock. And the verdict’s in: it’s been a crazy year in the world of cryptocurrency. But thankfully, in most cases, that’s crazy-good, as opposed to crazy-bad. That’s certainly the case for me personally at least. And this is why…

Back in January 2014, I organised the first Bitcoin Meetup in Scotland. As I wrote at the time, it felt like a bit of a leap of faith. Not in terms of the organisation (thanks to Meetup). But because the prevailing view amongst those few who’d actually heard of this ‘magic internet money’ was that the whole thing was a scam and destined to end in tears.

Whether real or perceived, it crossed my mind that there might be a reputational risk in becoming so deeply involved as an organiser. I don't consider myself risk-averse in any way. But as someone who had enjoyed/endured a legal career of more than a decade, I'm hardly the best person to judge. After all, loss-aversion has well-known effects on decision-making.

But try as I might, I couldn’t get past one simple fact. I’d spent many months by that stage falling deeper down the proverbial Bitcoin rabbit hole. Late nights wrestling with explanations about the technology, engaging with the economic implications, debating the future potential and limitations. To me, it was clear that change – at a fundamental, disruptive level that would resonate across multiple areas of everyday life – was coming. And yet, as far as I could make out, no-one in Scotland had got together in a room  to discuss what was going on. The decision was made. I might be left sitting alone in that pub one evening – but surely there had to be others out there.

The story of how the scene in Scotland developed after that first meetup (for which, to be clear, I claim no credit!) is an interesting one. But it’s not the focus here. Nor is the purpose of this post a chance for me to say ‘I told you so’ when we look at Bitcoin in 2017. I believe Bitcoin remains a technology in evolution with an indeterminate end state that has plenty of room left to run. The key thing here is the paradigm shift that’s taking place.

But that very first night in Edinburgh was important for another reason. I’m still in contact with many of the people that I met for the first time that night. But undoubtedly one of the most impactful conversations I’ve had was with someone who’d been one of the first to sign up for that meetup – a guy called David Irvine, who travelled all the way across from the West Coast of Scotland, from an outfit that went by the name of MaidSafe.

I’d tried to research everyone who’d signed up before the meetup. Not in a creepy ‘let’s-track-you’ kind of way. But in a ‘let’s-build-the-community’ kind of way. I wanted to help people to keep the conversations going after the event. And I have to admit, my feeble brain had struggled to understand what MaidSafe did before the Meetup. But that changed when I spoke to David on that evening. And I was dumbfounded by the fact that a project with such huge ambitions and such far-reaching implications was taking place pretty much under my nose in Scotland.

Since that time, I've been heavily involved in the Bitcoin/blockchain scene, particularly in Scotland. But I've always been convinced that something big was happening in the mythical shed in Troon. Throughout my travels, I kept pointing people in the direction of the SAFE Network and discussing what it represents. That included asking Nick (Lambert, COO) to give a talk when I put on the Scottish Bitcoin Conference in 2014, running a MaidSafe-focused meetup and also sharing in the rollercoaster excitement of the MaidSafe fundraising in April 2014.

Fast forward four years and I’m delighted to say that I’ve now joined MaidSafe full-time as Marketing and Outreach Coordinator. Most people who start at a new company talk platitudes about their new employers. But you’ll have to take my word for it in this case. I’d continue to sing the praises of the SAFE Network even if I wasn’t working here.

This is why.

MaidSafe's mission is nothing less than to build a new secure network that will revolutionise the way that every one of us uses the internet. Many years ago, David had worked out that we collectively needed a better solution. And MaidSafe is in good company, with none other than the inventor of the web, Tim Berners-Lee, sharing similar concerns. In fact, Tim is working on addressing the same sort of issues with his Solid project at MIT.

Over the past couple of years, the problems of data storage and security have only worsened. The concerns so presciently raised by MaidSafe eleven years ago have intensified in the collective awareness of society. We now see daily examples of sensitive personal information and data being hacked or misplaced by third parties. Arguments over privacy and net neutrality dominate the news. And new concerns over the excessive power wielded by giant internet companies are raised daily.

In short, as the internet has increased in importance to our daily lives, so has the visibility of its major flaws. And crucially – these aren’t issues that will simply solve themselves. We can’t sit back and expect things to improve. Technologies such as Bitcoin and Ethereum have helped to bring the benefits of decentralisation to the forefront of discussion. And even amongst those who remain cynical, few still believe our current architecture remains fit-for-purpose when it comes to the next few decades of human evolution.

In addition to playing a small part in helping to build a solution to a problem that increases with each passing day, there’s another big motivating factor at play for me here. With the emergence of MaidSafe so early in the chronology of recent events, I believe that many over the past few years have simply not had the opportunity to spend  the time to find out what the ultimate success of this project represents. I’ve been a member of MaidSafe’s forum (https://safenetforum.org/) since it was set up (not by the company but by enthusiasts around the world, it should be noted) a few years ago – and I’m constantly bowled over by just how engaged, respectful, intelligent and enthusiastic this community is.

Over the past few years, I've given many talks on Bitcoin and the blockchain scene in general. But the reality is that my advocacy has always been a response to the level of community engagement out there. The more people that found out about the subject, the keener they were to explore further. The similarities to me are striking. Today, I don't think most people are aware that the SAFE Network project has been active for eleven years. Just let that sink in for a moment. Pre-Bitcoin. The project even had a prototype crypto-currency before Satoshi's White Paper. As I said at the start, in the context of 2017, the SAFE Network is so far from being a hyped product it's not funny. But it's clear to me what the SAFE Network is: an open-source project, open to all, that invokes a passion and belief in a community all driving in the same direction.

Remind you of something?

As I start working with the team on a unique project, I can't wait to get out and do my bit. I remember a comment David made years ago. It was along the lines of "It doesn't matter who achieves our goal in the end – but it does matter that someone does". Joining a team that has been toiling away at some of the hardest technical challenges out there for over a decade – for the most part entirely unheralded and under the radar – I have no doubt that that's going to change soon. And I can't wait to get started.

If you want to get in touch and have a chat, please reach out. I’m pretty active on Twitter (@dugcampbell) or you can sign up and speak to thousands more via the forum (https://safenetforum.org/). In the meantime, we’re looking for some more people to join us at MaidSafe – so if you’re a UX/UI Designer, Software Support Analyst or Testing & Release Manager and fancy joining the team, please get in touch!

Tweetstorms

Edit: two hours after publishing this, Twitter announced tweetstorm functionality (‘Threads’). Good to see they’re reading the blog… 😉

I wasn't a fan of Twitter's recent decision to raise the limit on tweets to 280 characters from the tried and trusted 140. And a cursory review of the various online comments shows that I was far from alone. But today I'm happy to say I was wrong – but not in the way I was expecting.

Sure, it's easier to write tweets these days with the extra room, particularly if you're quoting from an article in the tweet. That's not always a good thing. But I think one of the real benefits can actually be seen in some of the more reasoned, multi-part tweets that we're seeing these days.

Tweetstorms have always split opinions. I remember being asked to take part by CNN in a debate on the future of Bitcoin a few years back and rubbing someone up the wrong way with my multiple answers (“Does he not know how to use Twitter?”). Trouble was, I just had too much to say on the topic. Nothing changed there I guess…

Still, that’s about the only time I’ve done it. The first time I ever heard of them as a defined concept I think was (like many things) back on Fred Wilson’s blog. But the interesting thing to me is that, somewhat counterintuitively, the value of the best tweetstorms to me has increased in line with the available characters.

Now, Twitter is hugely subjective. Maybe I’m just following folk who are good at it (or good at sharing those who are). But it feels like an unexpected step forwards in the value of Twitter for me.

And on that basis, here’s my favourite of the year so far. There’s so much wisdom in this one collection of tweets that I don’t know where to start – other than to say: read it. For non-Twitter users, click on the tweet below to read all 25 connected parts.

There are many more but other notable mentions include Marc Andreessen (pre-280, who actually coined the phrase ‘tweetstorm’) and Taylor Pearson.

I'm intrigued to see that Twitter is actively looking into how to make these kinds of tweetstorms far easier. If it helps unique and/or eloquent thinkers to easily share information in a way that rewards quality, I for one will be in favour.

Look No Hands! 

One story that caught my eye today was the Tesla that managed to predict a car crash ahead and react before a human could have responded. 

The Autopilot technology, rolled out overnight to all Tesla cars by way of a software update, includes a radar processing capability – in effect, the ability for your car to see ahead of the car directly in front of you.

There have been a few stories kicking around about the value of this newfound driving superpower, but today's story comes with a video of the incident which demonstrates precisely how powerful this technology could be in helping to avoid accidents.


I drove a fair distance today, the last hour or so of which was in the worst fog I've seen for many years. Whilst nothing's going to be perfect, I would 100% have preferred to have been driving a Tesla (obviously…). But what's really interesting here is the potential for so-called 'fleet learning' – each car uploading data from its daily experiences to a central database, with this improved collective knowledge then being recycled for ongoing use by the same vehicles.

A Safety Skynet anyone? 

Where Do We Go From Here?

The recent win by Google's AlphaGo computer program in a 5-game Go tournament against Lee Sedol, the world's top player for over a decade, made headlines around the world.

And once you look past some of the more superficial tabloid predictions of imminent robot enslavement, you’ll find a number of intelligent and fascinating accounts detailing exactly why the event represents something of a technology landmark.

It’s worth digging into Google’s blog post for the background. Because this was not just another case of a computer learning how to win a board game. Nor was it a resumption of competition between man and machine following our previous defeats in chess (against Kasparov) and in Jeopardy (by Watson).

Complex Choices

Instead, the choice of game here is significant. Go is an ancient game with more possible legal board positions than there are atoms in the universe. In fact, it took some 2,500 years before that number was finally calculated in 2016. Why is this important? Because it means that a computer cannot possibly find the best options simply by brute-force guessing combinations. Building a system to index all possible moves in the game and then relying on the computer to look up the best move each time is simply not possible.
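To get a feel for the scale, here's a quick back-of-the-envelope calculation. It uses the naive upper bound of 3^361 board configurations (each of the 361 points is empty, black or white) rather than the exact legal-position count published in 2016, which is smaller but still unimaginably vast:

```python
# Naive upper bound on Go board states: each of 361 points has 3 states.
upper_bound = 3 ** 361

# Commonly cited estimate of atoms in the observable universe.
atoms_in_universe = 10 ** 80

print(len(str(upper_bound)))                 # 173 digits, i.e. ~10^172
print(upper_bound > atoms_in_universe ** 2)  # True: beyond even (atoms)^2
```

Even this loose bound has around 173 digits – which is why no amount of raw lookup power could ever brute-force the game.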

Instead, a successful Go player needs to use something that we can best understand as intuition. A human has to be able to act on no more than a feeling that one move is better than another – something it was generally accepted that computers couldn't do.

Turns out general opinion was wrong.

Self-Taught

By 'simply' learning from 30 million moves played by human experts, the program showed that it could predict which move a human would make 57% of the time. But this would only go so far. To win, the AlphaGo algorithm needed to learn new strategies – by itself.

And it’s here that the outcome was stunning. During the games (live streamed online to massive audiences), the computer made certain moves that made no sense to Go experts. And yet (for the most part) they worked. As one commentator mentioned, this was, at some level, an alien intelligence learning to play the game by itself. And as another put it:

“..as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.”

When it comes to AI, it's particularly important to rein in the hyperbole. Playing Go in a way that's unrecognisable to humans at times is hardly Skynet. But it's fascinating to think that the program reached a level of expertise that surpassed the best human player in a way that no one really fully understands. You can't point to where it's better because the program teaches itself to improve incrementally as a consequence of billions of tiny adjustments made automatically.

Neural Networks: Patience Pays Off

The success of computer over man came from a combination of different, but complementary, forms of AI – not least of which were Neural Networks. After reading a little about the godfather of Deep Learning, Geoff Hinton, and listening to another excellent podcast from Andreessen Horowitz, it turns out that the approach of using Neural Networks (at the heart of AlphaGo) was an A.I. method that was ridiculed as a failure for a number of years by fellow scientists, particularly in the 1980s.

It turns out that the concept was just too far ahead of its time. As Chris Dixon points out in 'What's Next In Computing?', every significant new technology has a gestation period. But that often doesn't sit easily when the hype cycle is pointing towards success being just around the corner. And when the bubble bursts, the impact of the delays on the progress of innovation is usually negative.

Nowhere has that been seen so clearly as within the field of Artificial Intelligence. Indeed, the promise has exceeded the reality so often that it has its own phrase in the industry – AI Winters – where both funding and interest fall off a cliff. Turns out that some complex things are, well, complex (as well as highly dependent on other pieces of the ecosystem to fall into place). So in the UK, the Lighthill Report in 1974 criticised the utter failure of AI to achieve its grandiose objectives, leading to university funding being slashed and restricting work to a few key centres (including my home city, Edinburgh).

Expert Systems: Data Triumphs

Thankfully, the work did continue with a few believers such as Hinton. And whilst the evolution of AI research and progress is far outside the scope of this blog post, it's interesting to see how things evolved. At one stage, Expert Systems were seen as the future (check out this talk by Richard Susskind for how this applied in the context of legal systems).

To simplify, this is a method by which you find a highly knowledgeable human in a specific field, ask them as many questions as possible, compile the answers into a decision tree and then hope that the computer is able to generate a similar result to that expert when you ask it a question. The only problem is that it turns out this doesn't really work too well in practice.
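As a sketch, the whole approach boils down to hand-coded rules. The example below is entirely invented – a toy 'contract adviser' in the spirit of the legal systems Susskind discusses – but it shows both the method and its brittleness:

```python
# Toy expert system: an expert's answers compiled into a decision tree.
# The rules and thresholds here are invented purely for illustration.
def contract_adviser(case: dict) -> str:
    if not case.get("signed"):
        return "No binding contract: nothing was signed."
    if case.get("value_gbp", 0) > 10_000:
        return "High value: refer to a human specialist."
    return "Standard contract: proceed."

print(contract_adviser({"signed": True, "value_gbp": 500}))
# prints "Standard contract: proceed."
```

Every branch has to be anticipated and hand-coded in advance; any question the expert was never asked simply falls through the cracks – which is roughly why the approach stalled.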

But thankfully, those other missing pieces of the ecosystem are now falling into place. With massive computation, bandwidth and memory available at extremely low cost these days, those barriers have now fallen. Which has led to the evolution of Neural Networks from a theoretical, heavily criticised approach into something altogether far more respected and valuable.

Welcome to self-learning algorithms – algorithms that (in this case) teach themselves how to play Go better – but without asking a Go expert.

Neural Networks aren’t new in any way. They started as a mathematical theory of the brain but didn’t make much progress for 40 years. But with the barriers gone, we’re now seeing neural networks being piled on top of each other. And AI is improving significantly not because the algorithms themselves are getting better. It’s improving because we’re now able to push increasing volumes of data into models which can in turn use this data to build out a better model of what the answer should be.
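A minimal sketch of that idea – a model improving through nothing but repeated tiny adjustments against data, with no rules hand-coded (the numbers here are made up for illustration, and a real network does the same thing with millions of weights rather than one):

```python
# Data generated from a hidden rule (y = 2x) that the model must discover.
data = [(x, 2 * x) for x in range(1, 6)]

w = 0.0                       # the model's single weight, initially ignorant
for _ in range(200):          # many passes over the data...
    for x, y in data:
        error = w * x - y
        w -= 0.01 * error * x  # ...each nudging w slightly to reduce the error

print(round(w, 3))            # converges towards 2.0 – the rule, learned from data
```

No one told the model the answer was 2; it emerged purely from accumulating small corrections. Scale that principle up and you get the 'billions of tiny adjustments' behind AlphaGo.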

Learning By Intuition & Iteration

Instead of trying to capture and codify all existing knowledge, deep learning techniques are using data to create better results. It’s an approach that is scary to some people because it’s inherently un-debuggable. If you get the wrong result, you can’t simply check out each entry in a decision tree and fix the one that’s wrong.

But it’s got legs, particularly in the development of self-driving cars. So we don’t need to paint roads with special paint and maintain a huge global database of all roads and cars. Instead self-driving cars are going to use a collection of these machine learning techniques and algorithms in order to make the best guesses about how to drive each and every day.

Learn, iterate and improve. Scary? It shouldn’t be – because that’s exactly what we do as humans.

It’s a huge and fascinating field but the AlphaGo victory feels like an important bridge has been crossed, an inflection point when popular awareness coincided with a genuine step forward in the possibilities that the technology affords.

And of course, Google’s ultimate goal has never been to simply be better at winning games. Unless you define a game as being a challenge that is extremely difficult to beat. If so, then bring on the games – disease analysis, climate change modelling, the list is endless. When it comes to these contests, we might not expect them to be streamed live online. But as they increasingly become games that we have no option but to win, I’m pretty certain that the interest will be there.

The Listening Television

I finally took the plunge and bought a new TV today. Using a mass of Nectar points accumulated from food shopping over the past decade or so, I managed to get a good deal on a new low-end model. To be honest, I rarely watch the TV. Any viewing that I do have time for inevitably tends to be on the laptop these days. But the difference between the two models, old and new, is pretty significant. If nothing else, I have no idea how to get the old TV down the stairs – it’s that heavy.

But the whole experience of buying a new piece of tech – as exciting as that invariably is for anyone with geek-tendencies – was tempered by the story in the back of my mind about the recent Samsung Smart TV. These once-simple appliances have become completely different propositions these days, as Michael Price in Salon pointed out late last year (‘I’m terrified of my new TV: Why I’m scared to turn this thing on – and you’d be too‘).

We're suddenly in a world where so-called Smart TVs record our activities and choices, retaining the power to send such information on to marketers and other third parties to do with as they wish. The decision to be made by many consumers is in many ways an unfair one: disable many of your all-singing, all-dancing new TV's features or accept one further encroachment into your privacy.

As you might remember, the worrying issue with the Samsung Smart TV was the fact that it had voice recognition. Or, more accurately, because of the voice recognition features that it employs, the Privacy Policy for the TV shows that anything you say in the vicinity of the television may in fact be recorded and transmitted to a third party for analysis. When that's a marketing company, it's little more than irritating perhaps. But there's no guarantee that the data exchange stops there.

One of the biggest issues is the fact that Samsung is sending the customer’s voice searches and data in an unencrypted format. Think of the potential for hackers and snoopers to literally listen in.

Yeah, it was a lot simpler the first time I bought my TV. Even if it weighs about the same as my fridge and is almost as attractive…

Skyscanner Becomes A $1 Billion Business

It’s Friday so it’s as good a time as any to have some good news.

Scotland now has its first $1 billion internet business in the form of Skyscanner. I remember being at a startup drinks event in Edinburgh around five or so years ago and hearing Gareth talking about his vision to achieve exactly this goal. And now it’s reality. It’s been incredible (and hugely inspiring) to see the growth of the business in the intervening years and a real testament to both the leadership and the vision within the business.

OK, this comes with the obvious caveat that I worked with Skyscanner for a while so I may be slightly biased. But the reality is that what Gareth – and so many others – have achieved collectively is absolutely phenomenal. I would be dishing out the same praise whether I knew the team or not. But having seen the inside of the business only reinforces my belief that there is something very special going on away from the public perception of it being 'simply' a travel aggregation site (I'm not the only one to have seen this by any means). I look forward to watching them continue to grow.

I’ve written before about why Scotland’s such a great place to build a technology company. Edinburgh in particular leads the way, with a rich ecosystem of startups, Codebase (the UK’s largest tech incubator), the next edition of Silicon Milkroundabout landing next weekend, the Startedin group….the list goes on.

Skyscanner might be the first $1 billion internet business, it’s true. But now it’s time to build a few more.


A Decade Of Blogging

I’ve always been fascinated by blogging, certainly since it really broke into the consciousness of the general public around a decade ago. Regardless of the quality of the content, the ability to actively share content directly with an audience, no matter how niche it might be, immediately hit me as being incredibly powerful.

No gatekeepers.

I've learned a huge amount over the last decade or so from simply reading blogs. I remember once asking work colleagues how many blogs they read regularly. Or even irregularly. The answer, it transpired, was that not a single person read any. That still amazes me. Needless to say, I also understood that I was in the wrong job.

Of course the landscape has shifted hugely over the last decade. Some bloggers, real and anonymous, have moved on of course but many stalwarts remain (for example, Fred Wilson started blogging back in September 2003). Larger numbers of people are now producing content which, thanks to technology that’s freely available, has at least the potential of reaching a global audience. And of course the emergence of micro-blogging platforms such as Twitter really helped to tap into that pent-up desire that so many had to share something (with 288 million active monthly users generating 500 million Tweets per day currently).

However, a huge factor in the growth of blogging was the emergence of WordPress. Whilst investigating why WordPress has withdrawn support for Bitcoin payments this week, I came across this article from October 2004 talking about the early days when Matt Mullenweg developed WordPress, the juggernaut that is currently the most popular blogging system in use on the web, powering more than 60 million websites.

The philosophy’s really interesting here and really validates the open source model. Almost everything on WordPress.com is free. They charge for upgrades (whether it’s spam filters or custom domains) but the core proposition is – and always will be – free. If you’re worried about giving something away for free, I suggest you go and have a chat with Matt. I’m sure giving stuff away has done him much harm over the last decade or so.

Going back to the article, there seem to be some parallels between WordPress in 2004 and the state of Bitcoin in 2015. You can sense a seismic change coming. It's impossible to say yet when it will arrive or who the ultimate winners will be. But it's certain that there will be winners. As Scott Maxwell mentioned in the Q&A after the Bitcoin talk we gave at Dundee Tech Meetup yesterday, there are probably 5 or 6 places lying vacant at the moment just waiting for people to carve themselves a place in the history books. With every day, we get a little closer to the time when we find out who it's going to be.

The Evolution of Spending in the Sharing Economy

Change is a constant and it’s clear that the growth in the collaborative economy is going to reshape current spending patterns throughout many economies.

The actual impact is still hard to ascertain. But the evidence is stacking up that there are going to be significant changes in the near future. As Larry Fink pointed out in a recent article, the impact of technology can profoundly affect an entire industry, even if it only directly impacts initially on a small subsection.

Fink uses the example of hydraulic fracturing in oil production to make his point. As demand for oil has continued to rise by around 600,000 barrels a day over the past year, the actual supply – in part due to new technologies such as fracking (putting to one side for this article the immense damage that fracking causes) – has increased by around 2 million barrels a day.

His argument here is that (as damaging as fracking is) the technology has affected the overall price per barrel despite the fact that the majority of barrels are not produced using this method.

So when it comes to the sharing economy, what sort of changes are we likely to see as a result of the stellar growth of such businesses as Uber and Airbnb? For most younger people in the Western economy, there are twin goals when it comes to acquiring significant items of property: the car and the home. Not surprisingly, these are in the crosshairs of both growing businesses.

So whilst both assets are fundamentally different (one being an investment, the other a depreciating asset), the question still remains. If significant sums of money are less likely in the future to be tied up by these big capital outlays at the start of young people’s lives, where will they be directed instead? Any ideas?

Respond to the Scottish Identity Database Consultation Today

tl;dr Go here, download the Respondent Information Form and submit it before Wednesday 25th February to say that the proposals require primary legislation and should only be put forward after full public debate has taken place, given that they will fundamentally restructure the relationship of citizen to state.

It’s rare that I write something on my blog and ask people to act. But tonight is one of those exceptions.

A national identity card?

For many years, the concept of a national identity card has been put forward by various political parties around the UK. However, each time the topic has proved to be political suicide. Proposals have proved to be unpopular and consistently rejected by the electorate. Increasingly, as more people interact online, it’s become obvious that the risks of building up such a valuable store of information greatly exceed the potential benefits that any such scheme can deliver.

And yet, despite the general resistance to the concept of an identity scheme across the UK over the years, here in Scotland we face the very real risk that minor legislation that has been proposed to extend the functionality of NHS records will, in effect, have exactly the same effect by creating a national identity database.

I spent the evening tonight at an event organised by the Open Rights Group in Scotland who have taken on the important role of coordinating attempts to raise awareness and resistance to this legislation being enacted without appropriate levels of debate. The proposals come in the form of secondary legislation with a consultation period currently running under the slightly innocuous title of the Consultation on proposed amendments to the National Health Service Central Register (Scotland) Regulations 2006.

Legislation that has an impact way beyond your medical records

Before you go any further, I suggest you read ORG’s detailed response to the Consultation. The crux of the matter is this: if you live in Scotland, the chances are that the NHS already holds a record of the fact that you exist. But the problem is that this new legislation would enable the reference number that uniquely identifies you as an individual to be shared freely with another 100-plus Scottish agencies.

Why is this a big deal? The practical reality of the proposal as drafted is that it would create a Scottish identity database. We face a very real possibility that public bodies could then start to mine such data in order to build their intelligence about you in pursuit of ends that may directly conflict with your own.

So, to use a simplistic example, seeing your choice of library books used against you when it comes to claiming unemployment benefit (too much fiction, not enough textbooks?) becomes a very real possibility. Or how about the fact that most people who undergo some form of addiction counselling would normally want that information to be restricted rather than being shared widely amongst thousands of employees across different organisations. And it’s not difficult to envisage a situation whereby a victim of domestic violence learns of the increased transparency about her personal details and therefore attempts to remain outside the health system with issues unreported in order to prevent an abusive ex-partner who works for a public body from tracking her down.

The proposed model does of course bring with it certain efficiencies. But the reality is that the risks of potential misuse arising from the collection of such information are huge. By creating a comprehensive list of personal identifiers, we create an environment within which the temptation to use such a treasure trove of information for irrelevant or minor uses will inevitably grow over time.

I’m not going to write more about the privacy debate here. Plenty of people far more eloquent than I am have written fantastic pieces over the years detailing the risks of implementing similar systems (I recommend reading Wendy Grossman’s excellent SCRIPT essay on identity cards from 2005). But I did want to point out the following:

The massive risk of centralisation

If there’s one thing I’ve learned from my time spent with decentralised systems around Bitcoin and the blockchain, it’s this: design a system that protects value by putting everything in centralised locations and restricting access, and you inevitably end up with a system that will always – always – act as a red flag to hackers.

The more valuable the data (whether it’s money or personal information), the greater the incentive to attack it once it’s stored in one location. We’re not there yet, but I’m convinced that blockchain technologies will ultimately solve this problem.

So we have a database – now what?

The question here isn’t necessarily whether or not we trust our public bodies to use such collected information for good. The question is whether we can trust their defences to be 100% secure against breaches (whether internal or external). To save you the effort, I’ll answer that now: no, we can’t.

Whether or not we believe the future intentions of governments to be noble, the problem is that once such information has been handily compiled into a database, it cannot somehow be decompiled, so it will remain permanently at risk of being accessed by others. If you need an example, consider the fact that centralised security didn’t turn out so well for those world-leading experts in cyber-security at the NSA, did it?

Is the technology up to scratch?

The general consensus is that the technology systems used by the public sector in Scotland lag behind those in use down south. Not a good foundation for storing the crown jewels, as it were. If the NSA weren’t able to protect their own confidential data, I’m not convinced that the powers-that-be at Holyrood will be able to deliver a system that somehow fares better.

Have certain politicians changed their minds?

ID cards were rejected by many different politicians when the last serious attempt was made to introduce them a few years ago. That includes the SNP, who are currently backing this legislation. Back in 2005, the Scottish Government actually published a paper on Identity Management and Privacy Principles (revised in October 2014) which explicitly stated that public bodies must avoid sharing persistent identifiers when it comes to identity. Yet that is exactly what is proposed in this model. Have certain politicians forgotten their previous position on this issue? Or are people simply not talking to each other?

Respond to the Consultation

This is in no way a comprehensive post that details all the key issues. It is, I hope, a timely one though: it’s important for as many people as possible both to learn about the proposals and to be aware that the Consultation itself closes in under a week. Regardless of your views – pro or anti – this is not by any stretch of the imagination legislation that should pass through a democratic system without a wider public debate. It has the potential to fundamentally redraw the boundaries of citizenship within society, and it needs more people to become engaged. Nor is this simply a Scottish debate: it’s inconceivable that a system introduced in this country would somehow not be adopted south of the border at some point down the line.

Please do. You can respond to the consultation here.