Security, Identity And The Relentless Pursuit Of Data

Listen up... Every single day, technology and our understanding of what might be possible advances a little further. After a slow start (in retrospect), we’re now picking up pace. Soon we’ll be breaking into a jog along the road towards exponential development (see Ray Kurzweil’s ‘The Law Of Accelerating Returns’). Exciting times indeed. Yet increasingly I’m finding myself questioning how technology will impact issues of security and identity.

Both topics are going to be of critical importance over the next couple of decades. I can see personal identity driving the rise of individual and surge pricing for example – a move to a world where everyone is charged a different price according to past behaviours. But before we even get to that stage, a number of fundamental questions still need to be answered.

It’s pretty much accepted that advances in technology have brought massive improvements in the quality of life for many. Although not yet for all. The developing world for example is really just getting started, with booming mobile phone adoption rates connecting millions to money and knowledge in a way that was previously impossible.

But while technology creates collective benefits that are often direct (new products and services), indirect (more efficient processes) or a combination of the two, we rarely spend any real time actually debating how to deal with the digital exhaust that each of us is, knowingly or otherwise, leaving in our wake.

Now the question of who should be able to access this trail of information – and for what purpose – is one that’s causing tension between various groups. The battle lines are being drawn today whilst most people remain blissfully unaware.

Gold rush for a new era?

There are many parallels between today and the way that society has developed in the past. In days gone by, the majority simply watched as explorers and innovators rushed to acquire newly-discovered natural resources such as oil or gold, with little or no thought for the environmental consequences. We now see the same fervour from large businesses, governments and criminals as access to personal data increasingly becomes their lifeblood.

Yet, unlike oil, this treasure is neither transient nor limited. Once data is recorded, it does not tend to simply disappear, somehow self-destructing in James Bond style. It doesn’t matter how effective we believe current laws to be. The reality is that there can be no certainty about how our personal data will be used in the future by businesses and organisations that may not yet even exist, in a society whose cultural norms continue to evolve.

And whilst there could be a real benefit from our historic data being used to more accurately diagnose our medical problems in the future, for example, few people would be overjoyed about mobile phone location data being used in the same way to justify an increase in health insurance premiums because the data shows that someone was a regular visitor to a fast food outlet ten years before.

Let Battle Commence…

One of the best articles that I read last year – full stop – was by security guru Bruce Schneier. He frames the battle that’s shaping up in the digital world brilliantly and I thoroughly recommend that you make the time to go and spend ten minutes reading it in full.

Despite becoming increasingly high-tech, Schneier sees modern society reverting to a feudal system. Skirmishes are becoming more frequent between two distinct camps:

  • The Nimble: small tech-savvy groups of individuals
  • The Powerful: governments or large businesses

The nimble respond quickly to advances in technology by adopting platforms that spread their messages efficiently. Yet, whilst it might take them longer to get started, the established government or large businesses will usually end up in a far stronger position of power over the masses – simply because networks tend to amplify existing power, of which they had plenty from Day One. As Schneier writes:

“So while the Syrian dissidents used Facebook to organise, the Syrian government used Facebook to identify dissidents to arrest”

The vast majority of the world are vassals who fall into neither camp. We believe that our safest option is to ally ourselves to our feudal lords in order to gain their protection. Yet the reality is that our so-called free choice of ruler is increasingly an illusion. Most of us now rely on platforms and devices owned by these powerful organisations, who attract the average user with the lure of convenient storage of all of our personal information in one single location. Just think of Facebook and its quarter of a trillion photographs.

Decentralisation and Transparency

History shows that a true feudal relationship involves rights and obligations that run both ways. Yet events of recent years (including Wikileaks, Snowden and PRISM) have shown that it is going to be exceptionally difficult for those in power to strike a balance between such rights and responsibilities.

In fact, it’s probably an impossible task for them to gain that necessary perspective by themselves. Technology amplifies the potential damage that could be caused by one individual and therefore centralised organisations in power feel compelled to seek increasingly draconian powers to prevent such risks, however remote.

It’s a concerning trend because as society moves increasingly towards technology-driven distributed networks of individuals, this clash of interests can only become more frequent. Whether it is covert surveillance by governments or data collection by large organisations matters little.

Will society be content to let personal data be controlled by third parties in this way? Part of what makes Bitcoin so powerful is precisely that it is decentralised, with no central point that can be attacked or influenced – whereas the accumulation of data (for which read: power) in centralised organisations looks hugely problematic over time.

A tipping point for public interest?

But we’re in the early stages of this cycle. Over the coming years, I think that the real backlash may come when the average ‘vassal’ starts to see more everyday items plugged into the Internet of Things that know precisely who you are and what you’ve done in the past. Will a line be crossed when a bus shelter reminds you of that TV programme you watched on your mobile a couple of weeks before, for example?

Whether it will be too late at that stage to protect personal data is up for debate. Whether the nimble or the powerful end up ruling the world is still to be decided. But wherever we’re heading, the cost of starting to work on a solution now has to be significantly cheaper than simply waiting to fix the issue in the future once the data’s been released into the wild.

photo credit: communitiesuk via cc

Three Key Megatrends In Technology (And Society)

It’s not always that easy to see what’s ahead…

If you’re interested in technology, it’s very easy to be seduced by the hype that surrounds the new, shiny product or service that everyone’s talking about that month. And whilst that’s mostly harmless for the consumer, it can be fatal for a VC. Not only are the companies that you invest in risky, but by paying over the odds, you now need your winners to succeed on an even greater scale to have a chance of repaying the people who trusted you with their cash.

So I always find it interesting to hear VCs explain how they decide what to invest in, given that they focus only on sectors that they believe have tremendous growth potential. Fred Wilson is both a top VC and a particularly insightful daily blogger, and his recent talk at Le Web on three key megatrends in technology is no exception. You can check out the full talk in the video below.


You See Better From Further Out

Fred’s approach is to move one step back from focusing on so-called hot areas in general (such as machine learning and big data) to try to understand the bigger picture. Don’t attempt to guess which technology will be the most important. Look instead at how society is developing and the gaps that are being created. And it’s on this basis that he sees three ‘mega-trends’ driving business over the next few years.

1. Transition from bureaucratic hierarchies to technology-driven networks

Business traditionally functioned from the top down. Management orders filtered down through the levels whilst customer feedback would usually go directly to front-line (and often junior) staff. When the system worked, that feedback would have to travel back up through the various layers until management decided whether or not to make changes. Inefficient, yes, but justified by the high costs of communication.

But now that these costs have plummeted, traditional hierarchies are being replaced by technology-driven networks. Think about the disruption to the newspaper industry: vast newsrooms with armies of reporters directed by a publisher, with stories edited to meet deadlines before the publication of a physical daily newspaper. Cue the entry of technology-driven networks (and the advent of Twitter and blogs in particular), and now everyone can be a reporter.

The crowd on each network determines what is popular (by retweets, follower count and the like) and the news that is relevant is delivered to us instantly via our mobiles. The same disruption can be seen in film/television (YouTube) and the music industry (Soundcloud).
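That crowd-driven curation can be sketched as a tiny scoring function. The weights, field names and stories below are illustrative assumptions for the sketch, not how any real network actually ranks content:

```python
from dataclasses import dataclass

@dataclass
class Story:
    headline: str
    retweets: int
    author_followers: int

def crowd_score(s: Story, w_retweet: float = 1.0, w_follower: float = 0.01) -> float:
    # Popularity is decided by aggregated crowd signals, not an editor
    return w_retweet * s.retweets + w_follower * s.author_followers

stories = [
    Story("Local council meeting", 12, 400),
    Story("Breaking: major outage", 950, 2000),
]
feed = sorted(stories, key=crowd_score, reverse=True)
print(feed[0].headline)  # the crowd's pick surfaces first
```

The point of the sketch is the inversion of the old model: the ranking emerges from the network's signals rather than from a publisher's decision about what leads the front page.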

Consumers now have the power to clearly signal what they want and find useful. But Fred believes we’re still in the early stages of this process which is only now starting to ripple through other industries like hotels (Airbnb, OneFineStay), creative industries (Kickstarter) and learning (Codecademy). Most industries will be affected by networks over the medium term.

2. Everything is being unbundled

It used to be expensive to get products and services to market. That cost meant that businesses tended to bundle things together that the customers had to pay for, even if they didn’t necessarily want the full selection (think of the Sunday papers with News, Holidays, Finance, Fashion, Classified Ads & Sports sections). Yet technology makes it cheap for new companies to be built to deliver single parts of these products, with the result often being that the bit you actually want is now both cheaper and of a higher quality.

Banking is a great example of an industry that’s being unbundled. It used to be very expensive to open and run a physical branch so the banks offered all types of products, including mortgages, credit cards, small business loans and working capital finance. Yet new businesses are now able to use networks of individuals to provide more efficient, specialised and more effective products – through peer to peer lending for example (Lending Club).

University education is another area where the high costs of traditional delivery – sourcing a building, lecturers, expensive academic books in libraries, face-to-face lectures – are being disrupted by MOOCs and mobile online learning platforms. The network model is also changing the face of research, both through the growth of Open Access publications and by enabling people to collaborate across different locations and share scarce, expensive resources (such as medical equipment).

3. We are all now a node on the network

The mobile phone has changed the game forever. Whilst those in the developed world still have the option of choosing a laptop or desktop over a phone, in the developing world mobile has already won that race for dominance. With the cost of a desktop computer too high in such countries for general adoption, people moved straight to cheap (predominantly Android) smartphones. But regardless of location, the result is that we are all now connected to each other all the time. Cue a wave of opportunities for businesses that are able to build upon that knowledge of people, locations and photographs across the network – in transport/logistics (Uber), payments (Dwolla, Square) and dating (Tinder).

Where The Three Collide

Fred goes on to identify four key sectors in which all three of these mega-trends are making their presence felt:-


It’s obvious that we’re heading for major change in the world of money. I agree with Fred’s view that Bitcoin (or similar) is going to be responsible for so much more than just innovation in payments. It has the potential to become the financial and transactional protocol for the internet that has always been missing. As the standard way in which financial value is exchanged across the web and one that is entirely free from the control of any one party, money will be able to flow as freely and easily as content does today. As a protocol, it will also act as a foundation upon which entrepreneurs can build a whole variety of products and services.


Think of the growth of wearable technology with individuals wearing devices that can report back with details of their vital signs (Fitbit, Fuelband etc). In the future, some of this data will remain personal and private, some will be shared across networks and some will be exchanged solely between you and your doctor, caregiver or family member. Throw gamification into the mix (Fitocracy) and suddenly you’ve got a profound force for good with individuals making positive decisions about how to keep themselves fit and healthy.
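That split between private, shared and clinical data amounts to a simple audience policy. Here is a minimal sketch of the idea – the field names, tiers and readings are purely illustrative, not any real device’s API:

```python
# Illustrative only: the same wearable reading released at different
# levels of detail depending on who is asking.
READING = {"heart_rate": 62, "steps": 9800, "location": "gym"}

POLICY = {
    "public_network": {"steps"},           # e.g. a gamified leaderboard
    "doctor": {"heart_rate", "steps"},     # clinical detail
    "self": set(READING),                  # everything stays with you
}

def share(reading: dict, audience: str) -> dict:
    """Return only the fields this audience is allowed to see."""
    allowed = POLICY[audience]
    return {k: v for k, v in reading.items() if k in allowed}

print(share(READING, "public_network"))  # {'steps': 9800}
```

The design choice worth noting is that the filtering happens on the individual’s side of the exchange: the network only ever receives the subset you have decided to release.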


When the industrial revolution arrived, the side-effect of such rapid development was the pollution that poured into our environment. By the time we realised and the clean-up started, almost a century had passed and we faced a far harder task than we would have done had we dealt with the problem at the time.

Arguably we’re now facing exactly the same problem in the information age – only this time the pollution is data. Every digital activity we carry out leaves data exhaust which is, like it or not, letting other parties observe our activities. Fred’s view is similar to that of most people I speak to: most of the time, he’s happy to let the government, Google, Facebook and others spy on him. However, sometimes the services that we’ve used end up recording our activities when we don’t want them to. Therefore, getting some control over this data leakage, at both an individual and a societal level, is important.


Currently, many of us sign into services using our identities from other platforms (e.g. Facebook, Google, Twitter etc). Whilst it is extremely handy to use their authentication services, we are essentially giving these companies knowledge about everything that we do. Fred predicts the emergence of a standard protocol that will provide individuals with control over their own identity, trust and data which will be distributed (like Bitcoin, across many thousands of computers), free from any one party’s control and global.
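One way to picture such a protocol is credential derivation: a single master secret held by the user, from which a different identifier is derived for each service, so no platform can correlate your activity elsewhere. The sketch below is an assumption-laden toy – real proposals use public-key signatures rather than the HMAC derivation used here, which merely keeps the example runnable with the standard library alone:

```python
import hashlib
import hmac
import secrets

# The master secret is generated and held by the user alone -- it is
# never handed to Facebook, Google or any other central party.
master_secret = secrets.token_bytes(32)

def service_credential(service: str) -> str:
    """Derive a stable, per-service identifier from the master secret."""
    return hmac.new(master_secret, service.encode(), hashlib.sha256).hexdigest()

# Each platform sees a different identifier, so none of them can link
# your activity on one service to your activity on another.
id_social = service_credential("social-network.example")
id_bank = service_credential("bank.example")
print(id_social != id_bank)  # True: the identifiers are unlinkable
```

The service names are placeholders; the point is simply that identity flows from something the user controls, rather than from an account issued and owned by one of the feudal lords.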

Tick The Boxes

No matter whether you’re a VC, entrepreneur or just a citizen in the modern digital era, Fred’s talk provides plenty of food for thought. Using this framework provides a useful lens through which to watch just how the world will change in the next few years as a result of developments in the tech world.

We’re only just at the starting line: the pace of technological advancement can only accelerate from here on in as networks strengthen and the remaining friction that slows down the voluntary exchange of information between people anywhere across the world disappears completely. So if you’re looking to start up a new business or simply to future-proof the one you have, you could do far worse than start to consider how to take account of all three.

photo credit: C.P.Storm via cc

A Deluge of Opinions On Uber’s Surge Pricing

Surging ocean waves
The storm continues online around Uber’s surge pricing model

As the weather starts to worsen for us Northern Hemisphere types, it’s been interesting to watch the debate develop around Uber‘s use of surge pricing during a particularly wintery December weekend in New York.

Cards on the table, Uber fascinates me. Whilst I’m not quite as bullish in my assessment of their future as some who are confidently predicting that it’ll grow into a more significant company than Facebook, I’m convinced they’re on the cusp of something huge and far more important than simply providing high-end transport through a slick app that handles payment directly (see my previous post on the two-way feedback mechanism they employ for both drivers and riders). The moment they start to use that data to morph into more than simply transporting customers with high levels of disposable income, things could really start motoring (excuse the pun). To quote Shervin Pishevar:

“Uber is building a digital mesh – a grid that goes over the cities. Once you have that grid running in everyone’s pockets, there is a lot of potential for what you can build as a platform”  

Like all modern businesses, Uber generates a potential goldmine of user data. But it’s the use of that data that’s the current hot topic. Surge pricing relies on an algorithm that temporarily increases the price of a journey when the supply of cars gets tight. Following basic economics, a sharp increase in demand for rides (due to weather or infrequent events, such as New Year’s Eve) causes prices to spike upwards in order to entice more drivers out onto the roads to satisfy that demand.
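In outline, that mechanism is just a multiplier driven by the demand/supply ratio. Uber’s actual algorithm is not public, so the thresholds, cap and linear ramp below are illustrative assumptions only:

```python
def surge_multiplier(ride_requests: int, available_drivers: int,
                     threshold: float = 1.5, cap: float = 3.0) -> float:
    """Return a price multiplier based on the demand/supply ratio."""
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    if ratio <= threshold:
        return 1.0                      # normal pricing
    # Ramp up linearly with excess demand, capped to limit the backlash
    return min(cap, 1.0 + (ratio - threshold))

print(surge_multiplier(30, 30))   # 1.0 -- supply meets demand
print(surge_multiplier(90, 30))   # 2.5 -- a cold, wet Saturday night
```

The cap is worth noting: a bound on the multiplier is one of the levers a company could pull to trade a little peak revenue for a lot less customer fury.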

It all sounds fine in principle, although there are plenty of suggestions about alternative models that Uber could be using. But the current problem is that every time they use surge pricing, Uber walks headlong into a customer backlash, fanned by the social media platforms that are so integral to the daily routines of their target customers. Many are now asking the question: is it worth making extra money out of your loyal customers during peak times if it means risking customer dissatisfaction over the longer term?

Of course, variable pricing as a concept is not new. Every time you fly, the chances are that you’ll end up sitting next to someone who paid a different price. Yet there are still a huge number of companies who leave their prices unchanged whilst supply and demand vary daily. Is it just the case that we as consumers need to catch up with dynamic pricing models as they become more common? To my mind, it’s not too far-fetched to imagine society moving towards an individual ‘eBay on steroids’ style of commerce as we become increasingly connected and systems get better at accurately identifying demand.

But for that to happen, customers need to be comfortable about how the prices are being set. In Uber’s case, the app displays a clear message about the temporary price hike before any journey takes place. But it’s prompted a debate about how those prices are set – in this case, how transparent an algorithm that identifies high demand and powers the price spikes can ever be. Once a company starts to build up significant data about you, it goes without saying that trust becomes critical. What happens, for example, if a price rises simply because the data shows that the customer is a regular who has always been happy to pay higher prices in the past?

Remember when Amazon tried charging a higher price to regular shoppers who hadn’t cleared their cookies back in 2000? Not their most popular move. Of course, there’s no evidence that this will be Uber’s chosen path. But in the wider scheme of things, it’s possible to see this question being asked more frequently as the market becomes increasingly frictionless, search more powerful and transactions faster to conclude digitally.

One thing that is certain is that Uber is a young business that is making enviable sums of cash. It’s clearly doing something very right by focusing on monetisation (as opposed to traction) far earlier than many other tech giants did at a similar stage. It’ll be interesting to see how it pans out over the longer term however as Uber becomes more ubiquitous.

photo credit: AGrinberg via cc

Is Technology Really Destroying Jobs?

Speeding Technology

If you’re reading this blog, the chances are that you’ve got at least a passing interest in technology. At the same time, you’re probably creating jobs for others as an entrepreneur or you’re an employee yourself. Either way, at some point, the question of whether technology could replace jobs – even yours – in the future has probably crossed your mind.

As I’ve mentioned before, the general public tends to overestimate the current ability of robots to rise up and consign mankind to the scrapheap. But clearly there’s still a tension here. After all, any time an industry is disrupted by the development of successful new technologies (think Skype, Netflix, Tata Nano etc), it’s always likely to result in job losses as the traditional big players struggle to keep up.

In the ideal world, everyone should ultimately win following advances in technology: the consumer gets cheaper, better services and products; the new business creates new jobs; and, as the technologies collide with mainstream demand, there’s an exodus of talent from the existing industries to the new exciting frontiers. The identity of the employer might change but, for the most part, everyone finds something to do and keeps working.

But does the evidence actually back this up?

Rising Productivity But A Slowdown In Employment Growth

Not according to Erik Brynjolfsson, a professor at the MIT Sloan School of Management, and Andrew McAfee, associate director of the MIT Center for Digital Business at the same school. They argue that current developments in computer technology are destroying jobs more quickly than replacement roles are being created.

The US statistics seem to back this view up. Despite productivity and employment growth enjoying a very similar upwards trajectory ever since the Second World War, things changed abruptly in 2000 when productivity kept rising whilst the growth in employment stagnated.

So have we reached a tipping point in the continual development of new technology? Or is it pointless worrying since people have always found something else to do when faced with unemployment caused by technology in the past?

If we assume (perhaps naively) that the statistics are correct, we face a key question that economists know only too well the world over: can we honestly identify technology as being the main reason for this slowdown – or should we also be looking at the vast range of other macroeconomic factors?

The ‘Hollowing-Out’ Of The Middle Classes

I’m no economist. But putting aside the anecdotal scare stories in the popular press about the threat of faceless technological progress for a moment, the area that’s really of interest to me is how technology is affecting certain types of roles, such as clerical work and professional services. I’m not about to define the middle class here but if we’re looking at trends, it’s clear that computers are being used very effectively in certain areas of the workforce that share similar traits.

A report released this week by the Oxford Martin Programme on the Impacts of Future Technology shows that nearly half of US jobs could be at risk of computerisation, with transport, logistics and office roles the areas most under threat. If you’ve got the time, you can get the full 72-page working paper, ‘The Future of Employment: How Susceptible Are Jobs To Computerisation?’.

Arguably, it’s those in the clerical and professional jobs that may have more to worry about as computers continue to improve their problem-solving abilities using a combination of artificial intelligence and big data. Take Watson, for example, IBM’s computer that beat the human contestants in a version of the TV show ‘Jeopardy!’ in 2011. That technology is now being directed towards a whole range of areas, including healthcare, customer service, investment advice and cooking.

So, for example, if you look at how Watson is being used in the field of medicine, the computer is now learning how to diagnose patients by combining its ability to assess vast amounts of medical data with natural-language processing and analytics that are continually improving. It’s still early days but the potential of this scale of computing power is becoming clear.

How To Keep Your Job

Want to protect yourself? Current thinking is that you need a job further up the chain, one that requires creative, social and problem-solving skills which will be far harder to automate over the next few years. In these areas, technology isn’t replacing individuals but assisting them: it enables the employee to do his or her work more effectively. Think of the joiner who uses an electric drill to work more efficiently – he doesn’t get his P45 because the employer chooses to employ the drill instead.

An even more relevant example presents itself here when you start to think about the implications of the widespread adoption of technologies such as Google Glass in a work environment. It seems to be vital for employees to maintain a culture of curiosity whilst actively striving to become increasingly technologically literate if they want to continue to pay their bills.

Yet increasing numbers of people are still required to carry out low-skill jobs. Automation is just not very good yet at replacing janitors, home helps and restaurant workers, for example. Plus it’s important to remember that in many cases, technology is helping businesses not only to survive but also to expand quickly when they’re faced with a lack of available labour to meet the growing demand for their new products and services.

Is It Just A Case Of Learning New Skills?

The reality is that many new technology companies are still heavily reliant on the humans behind the scenes. For example, Amazon might be increasingly dependent on Kiva to replace human warehouse staff with robots, but Kiva itself has a huge demand for new software engineers. The success of that business depends on finding talented individuals to constantly develop improved algorithms to ensure that the robots act more efficiently. Robots have never been good at dealing with change and uncertainty so if your job has that in droves, it’s safe to say that there could be a growing demand for your time.

The Autonomous Economy: Waiting In The Wings

But there’s (at least) one more significant factor that we need to consider when looking at how the development of technology in the modern era differs from the past. And that is quite simply that, in some areas, our economy is now developing without any direct involvement from humans.

Or, more accurately, more can now be done automatically by computers that are learning how to do things by applying themselves to big data using new advances in artificial intelligence and smart analytics. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligent systems lab, calls this “digital processes talking to other digital processes and creating new processes”.

Result? We can do more with fewer people and some jobs become obsolete.

Here’s one of his examples. You no longer speak to humans as often when you check in for a flight these days. Now you simply type your booking number into a machine at the airport. That one simple act sets in motion a chain of events involving many machines speaking to each other simultaneously about a huge range of topics without any human intervention (including flight status, your past history, security checks, seat choice, foreign immigration and, in some cases, automatic decisions about weight distribution on the plane). Decisions are being made automatically in a way that was inconceivable before the networked internet age.
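The shape of that chain is easy to sketch: one human action kicks off a pipeline of automated systems, each consuming the output of the last. The functions, names and return values below are stand-ins invented for the sketch, not any airline’s real systems:

```python
# Each function stands in for a separate automated system.
def flight_status(ref):
    return {"status": "on time"}

def security_check(ref):
    return {"cleared": True}

def assign_seat(ref):
    return {"seat": "14C"}

PIPELINE = [flight_status, security_check, assign_seat]

def check_in(booking_ref: str) -> dict:
    """One booking reference triggers the whole machine-to-machine chain."""
    record = {"ref": booking_ref}
    for step in PIPELINE:
        # The systems talk to each other, not to us
        record.update(step(booking_ref))
    return record

print(check_in("ABC123"))
```

Nothing in the loop requires a person: once the booking reference is typed in, every subsequent decision is a digital process responding to another digital process.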

The Future?

Whether you believe in the argument that technology is destroying jobs or not, it does seem beyond question that income is moving gradually in favour of the so-called ‘tech-savvy’. The untapped potential of computer power, big data and individuals skilled in developing the sector is large enough to drive an exponential advance in digital technologies over the next few decades.

Whatever the eventual outcome, the one thing that’s clear is that change is a constant. The sooner we learn to continually recalibrate our expectations and skills, the more effectively each of us will be able to respond. In my view, anyway. After all, “anyone who claims they can reliably predict the future is a huckster with something to sell you – even if their product is only themselves”.

photo credit: Stuck in Customs via cc

Digital Skills That Pay The Bills

Practice makes perfect

As technology continues to develop, it’s fascinating to watch how people react to enforced changes in the workplace as a result. Some actively try to keep abreast of developments whilst others remain passionately focused on ignoring anything that is not directly relevant (as they see it) to the functions of their daily job. But whilst arguments vary as to how quickly those changes are taking place, it’s becoming increasingly clear that we’re developing a skills gap in the country.

The vital importance of the UK’s digital sector

Economic forecasts from the Boston Consulting Group predict that the digital sector will contribute £225 billion to the UK economy by 2016. To put that in perspective, that’s up from £121 billion in 2010.

With numbers that large, it can be hard to grasp what they mean in practice. So it’s perhaps useful to consider one statistic in particular: how much the internet contributes to UK GDP compared to other European countries. When the UK has been identified as ‘the most internet-based major economy’, it’s a safe bet that we face greater potential opportunities than many other places – for example, UK citizens spend on average £1,083 per year on online shopping, compared to, say, £487 in France. I don’t think we’re a country so far ahead of our peers competitively that we can afford to ignore the kind of opportunity that comes from this level of online activity.

Recently the CEO of O2, Ronan Dunne, wrote an article that was widely shared online about ensuring that we build a workforce that’s fit for the digital age. With the UK economy now starting to turn the corner, it seems to me this is a pretty significant issue for us all going forwards. He argues that we should be looking at developing three areas in particular.

1. Digital infrastructure

Using technology that helps people be increasingly flexible about how and where they work has obvious advantages, whilst the decreasing cost of more efficient technologies can help to protect jobs that are currently threatened by cost-cutting.

2. Digital transparency

Most people are aware to some extent of the upcoming privacy battles in the technology industry. For example, I’ve previously touched on the challenges that the widespread adoption of Google Glass might represent, and as the number of devices connecting to the web via the internet of things increases, other key areas will continue to create challenges for us all. However, Ronan argues:-

“If we are to make the most of the big data opportunity, business and government need to take collective responsibility for helping the public to better understand the value exchange”

I think this is a key point. If you’re running a business, it’s up to you alone to convince your customers that by choosing (and that choice is the key) to share their valuable personal information with you, they will be rewarded with a far more efficient and enjoyable shopping experience as a result. For every modern business, it’s my view that building that trust by continually ‘getting it right’ (for which read not assuming that a customer has somehow given implied approval to your intrusive and unwanted marketing campaigns) is absolutely crucial for long-term success.

Of course, others have varying views. You might subscribe to the Zuckerberg belief that the age of privacy is over or disagree on principle with any privacy statements that are uttered by anyone who has a financial interest in the outcome.

Or perhaps it’s not quite that simple. J P Rangaswami, Chief Scientist at Salesforce, gave a fantastically powerful talk at the recent Turing Festival in which he reminded everyone that, before the costs of mass migration dropped and people became strangers to their neighbours, it was entirely normal to buy goods and services only from people you knew personally, and privacy was no big deal. It’s a source of conflict but, whatever the outcome, there’s no denying that as more commerce (and real life) is conducted using technology, these issues will only become more acute.

3. Digital literacy

Yet it’s this final point that was the original reason for this meandering blog post. For the country to make genuine headway and grasp the opportunities that lie ahead, we need to get the talent in place. And the incredible thing is – for the most part, that talent’s available already. We’re just not using it properly.

It might be a truism to say that many young people in the country already have many of the digital skills that are necessary to fill the gaps that will become more acute with each passing month. But it’s a truism because, quite frankly, it’s true. In May to July 2013, 960,000 young people aged 16-24 were unemployed. When many current business leaders are struggling to keep up with the pace of change, it’s hard to believe that many of those digital natives don’t possess exactly the type of digital skills that are going to be increasingly required by businesses in the developing environment.

Who’s Going To Lead The Charge?

But who’s going to stand up and take responsibility for this? As Simon Devonshire writes on his blog:-

“If we believe it is Government, then exactly which Minister is accountable for the digital transformation of the economy? I don’t think we have one. I’m not aware that the Bank of England has anyone focused on understanding the economic impact of the internet, despite the UK’s lead of e-commerce as a percentage of GDP. Universities offer computer science education, but that is only one of the ingredients necessary to realise the digital opportunity”

We’re in real danger of losing ground here by sitting back and relying on others to make the necessary changes. Some progress is being made — for example, the decision to make coding (sort of) compulsory in UK schools from 2014. But at the same time, we can’t assume that a system of education developed many years ago has the best structure to deliver this. I’ve mentioned the quote on this blog that “65% of kids at school today will end up in jobs that have not yet been invented”. Parents could do worse than helping their kids to learn coding at an early age. Not sure where to start? Check out Codecademy or another option on this list of resources for inspiration.

But it will take more than this.

We Need A Culture of Curiosity

As the demand for workers with digital skills explodes, we need to train the young in the skills that will make them employable over the coming years, whilst working hard to fill the skills gaps that exist in businesses today with those who are currently desperate for a job.

When you consider that the Internet has only been around (in broad terms) for thirty years or so, it’s important to remember that every single one of us – from the executive who understands no more than simple email to the most advanced coder – has been a learner at some point. As Gillian Andrews writes in her blog:-

“Part of what needs to be learned is how to learn, over and over again. Simply learning where the button is for ‘cut’ or ‘undo’ is not enough.”

To me it seems that it’s not necessarily about teaching the skills quite so much as ensuring that we each develop the curiosity that’s required to adopt a mentality where we’re all hungry to learn. Each and every day. And that responsibility falls squarely on each of our shoulders as individuals.

If you’ve got any thoughts, I’d be interested to hear them.



photo credit: rolvr_comp via cc

Robots: Seeking Jobs Or World Domination?

Robot from Edinburgh University
Dark The Robot is a very friendly chap

I’ve always been interested in robots. I don’t know who’s to blame – R2D2, Twiki or the Gunslinger. I have a soft spot for books like ‘Robopocalypse‘ and actively seek out discussions about how long we have to wait until we hit the technological singularity. So when I was asked by the Beltane Public Engagement Network (thanks Sarah!) to go along to one of their events titled ‘Robots Rise‘, it’s fair to say there wasn’t too much arm-twisting going on.

Robotic Historic

At first, the idea of discussing robots in the lavish and dated wooden and mirrored surroundings of The Famous Spiegeltent felt slightly surreal. Now I realise it was ideal. Why? The Spiegeltent was built in 1920 – exactly the same year that the word ‘robot’ was used for the first time ever in a play called ‘R.U.R. (Rossum’s Universal Robots)‘ by Karel Capek.

The word ‘robot’ in Capek’s native Czech means the forced labour that serfs were required to carry out on their master’s lands. Of course, in fiction, robots often appear as metaphors for human problems, whether slavery or racism. But as they become increasingly visible in society, will we end up teaching them such concepts as cruelty – or will they be capable of learning such flaws themselves? In essence, how human do we actually want our artificial intelligence to be?

The session was led by Subramanian Ramamoorthy, Lecturer in Robotics at Edinburgh University, who gave his expert views on how far robots are already intertwined with our daily lives and how much further that’s likely to develop. It was a fascinating chat; here’s what I took away from the session:

When Will Robots Take Over The World?

Let’s cut to the chase and start with the million dollar question.

The answer? Not any time soon. I get the sense that it’s a question that researchers get asked way too often. There are various reasons why robots actually taking over the world is unlikely, but high up there on the list is the simple fact that there’s no logical reason why they’d want to. Even humans don’t seek world domination (well, most of us). And, even if they did change their minds, their batteries wouldn’t last (honestly).

Interestingly, many people seem to assume that robots will develop some malevolent intention as they evolve – perhaps a view that’s been heavily influenced by Hollywood (e.g. Skynet). Yet the reality is that most developments in robotics currently focus on assistive, rather than disruptive, technologies. The most obvious future uses of robotics involve helping humans to carry out manual and repetitive tasks (for example, cleaning cups) or remote exploration, for example.

Still, despite all the expert evidence to the contrary, I find it hard to ignore the march of progress under Moore’s Law and this animated graphic, which shows just how long it will be until computers have the same power as the human brain. Makes you think, doesn’t it?

Will Robots Take Our Jobs?

At one level, it’s already happening. If you’ve ever ordered a book from Amazon, it’s likely to have been physically selected for you in the warehouses by a robot. Amazon didn’t pay $775 million to buy Kiva Systems Inc. last year for nothing. Returning to their charging stations automatically, these 24-hour workers won’t be asking for a coffee break any time soon.

Of course, there is always the possibility of unrest if robots displace vast numbers of workers. But in many ways, the more interesting question is how this technology could be applied to complement existing human roles. Consider how we currently search for a missing person for example. If there’s no trace found, it may be very difficult to justify the cost of a policeman searching a remote location for an extended period of time. But the cost/benefit analysis of asking a robot to carry out the task for an extended period of time may look entirely different.

For example, it’s not hard to see how any army will be able to make use of these (don’t worry, you’re not alone if you start to get a little creeped out by progress here):-

Robots In Space

Robots have been up in orbit for a while. But far from simply replicating fiction, it’s useful to understand why they’re actually required. Whilst an astronaut’s job might appear glamorous, the reality is that much of the daily routine is just that – a mass of repetitive boring jobs. I suspect few astronauts dreamed that the spectrum of tasks that they’d be required to carry out when pushing the boundaries of mankind would involve quite so many requests to empty the toilet on the International Space Station…

Robots are great at the manual tasks. Robonaut 2 is by all accounts carrying out a great job on the ISS and what’s more, he’s pretty funny on Twitter too (@AstroRobonaut).

But Why Focus on Humanoid Shapes?

The question was asked why we seem to be focusing on building more humanoid robot shapes than purpose-built structures. It’s clear that having a cute wee fella that speaks to you like Dark the Robot (pictured above) on a stage brings a favourable response that gets people talking. It’s almost PR for the field as it entices people into learning more about the subject.

There seem to be different lines of thought on this topic and the question of whether we are focusing on developing humanoid robots too readily is a source of real debate within the robotic community that’s likely to continue.


Those of us who live outwith the rarefied circles of AI/robotics research but within ready reach of great films appear to hold overly-optimistic assumptions about the current rate of progress. Continued developments enable us to improve all the time, but the evolution of robotic abilities still lags well behind the development of a human child, for example. Progress is being made, but it’s important to remember that, in general, we’re still only able to teach robots to carry out certain tasks with real effort – we might have developed a robot that learned how to fold towels, but it’s still taking 25 minutes per towel.

An Ethical Stramash?

Surprisingly not, for the most part. Despite the incremental nature of advances in robotics, researchers apparently get asked questions about ethics frequently, way before any such issues could actually be faced. The reality is that, except in very specific areas (such as medical technology), researchers are still a long way from having to tackle particularly taxing ethical problems.

So What Does The Future Hold?

Good question. Everything. And yet, many important limitations remain.

When I actually saw a robot in the flesh (so to speak) on Friday, I couldn’t help but be struck again by precisely how complex they really are. OK, so we might laugh at their basic footballing skills, but the reality is that the work that’s taken place to get to that stage is incredible.

The reason that any robot ever moves comes down to a complex combination of software and hardware: every joint contains a motor that is activated by software, working in combination with the robot’s other senses, such as vision (identifying colour and shape), touch sensors, accelerometers and sonar, amongst others. Putting all that together so that it works as intended is no small task, to say the least.
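To make that combination a little more concrete, here’s a toy sketch of the basic sense-and-act loop: fuse the sensor readings, then drive a joint motor accordingly. Everything here – the sensor names, the thresholds, the step sizes – is invented purely for illustration; no real robot runs on code this simple.

```python
# A toy sense-act control loop: each tick, sensor readings are fused
# into a decision that drives a joint motor.

def fuse_sensors(vision_offset_deg, touch_pressed, sonar_cm):
    """Combine sensor inputs into a target joint adjustment (degrees)."""
    if touch_pressed or sonar_cm < 10:   # obstacle detected: freeze the joint
        return 0.0
    # otherwise steer the joint towards whatever vision has spotted,
    # capped to a safe step size per tick
    return max(-5.0, min(5.0, vision_offset_deg * 0.5))

class Joint:
    def __init__(self):
        self.angle = 0.0
    def actuate(self, delta):
        self.angle += delta

joint = Joint()
# simulate a few control ticks: a target drifts into view, then an obstacle
readings = [(20.0, False, 100), (10.0, False, 100), (4.0, True, 100)]
for vision, touch, sonar in readings:
    joint.actuate(fuse_sensors(vision, touch, sonar))

print(round(joint.angle, 1))  # the joint moved on ticks 1-2, froze on tick 3
```

Even at this cartoon level, the interplay is visible: one bad sensor reading or one missing safety cap and the joint misbehaves, which hints at why the real engineering effort is so substantial.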

One of the stated goals of the Robot World Cup is to evolve the technology so that a team of robot footballers can actually defeat the human World Cup winners by 2050. Is it likely? I don’t see why not when you take a look at the most recent robot from DARPA.

Or Is The Future Already Here?

If you really think about it, we’re actually pretty far down the line in some ways already. Estimates state that by the end of 2013, there will be one smartphone for every five people in the world. To recycle the often-repeated statement, every single one of those has processing power far in advance of that used by the Apollo moon landing programme (as an aside, I just found out that you can actually build your own working replica NASA Apollo Landing Computer if you’ve got both the inclination and a spare $3,000).

Then consider what Google and the other search engines are accomplishing by indexing the world’s information. Start to tie that data in with what might be possible via wearable technology such as Google Glass and you really start to get a glimpse of the future.

For now, it seems that the field is focused on building fundamentally better robots (physically) whilst improving the existing skills of interaction (via programming advances). We’re still a long way away from developing robots that are self-powered with the ability to repair themselves at will. But whatever the evidence to the contrary, I can’t help but think that this is another area where things are just going to accelerate in the future.

It’s a fascinating topic. I’d love to fast-forward twenty years and revisit this post again. But in the meantime, I’ll leave you with one thought.

Rapidly ageing population of the world – meet ASIMO.


How Adaptive Learning Could Change Education Forever

Adaptive Learning - Brain Cap

There’s a famous quote that’s been doing the rounds for a few years from the US Department of Labor.

 65% of the kids at school today will end up in jobs that have not yet been invented

Accurate or not, there’s no doubt that those responsible for designing education systems are facing some real challenges. After all, how do you teach students effectively if you’ve got no idea of the skills they need to be successful?

With an uncertain future, the one thing that is clear is that there are opportunities for us to improve the system of education that’s in place and which hasn’t, in many cases, changed for decades. I’m coming across increasing numbers of articles online about the challenges – for example here and here. Now, I’m far from an expert in this area. But as a keen advocate of the disruptive potential of Massive Open Online Courses (or MOOCs, as they’ve become known; tip: check out Steve Blank’s excellent ‘How To Build A Startup’ course on Udacity) and the author of an MBA dissertation on open access publishing, I’m pretty certain that removing the barriers to learning that currently exist can only be a good thing.

Knewton And Adaptive Learning

When I was in Glasgow at the Digital 2013 conference last week, Tom Hall (@tomjhall) from Pearson introduced me to something that had – somehow – passed me by so far. Knewton is an adaptive learning platform that provides personalised educational content. It accumulates data from students as they move through the learning process and then uses that information to make the process more effective by personalising the experience for each individual.

The net result is that the curriculum adapts directly to the needs of each user and then pieces together the perfect individual bundle of content for each student. To put it simply: if it works well, more people will be more engaged and therefore more successful at learning, as they will be presented with the information (and subsequently tested on it) in a way that suits them, as opposed to the ‘one-size-fits-all’ mentality that we’ve been forced to accept up until today.
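As a rough illustration of the principle (and only that – Knewton’s actual models are far more sophisticated and not public), an adaptive loop can be sketched as: estimate mastery per topic, serve content from the weakest topic, and nudge the estimate after every answer. Topic names and numbers below are made up.

```python
# A toy adaptive-learning loop: track an estimated mastery score per
# topic and always serve the next question from the weakest topic.

def next_topic(mastery):
    """Pick the topic with the lowest estimated mastery."""
    return min(mastery, key=mastery.get)

def update(mastery, topic, correct, rate=0.3):
    """Nudge the mastery estimate towards 1 (correct) or 0 (incorrect)."""
    target = 1.0 if correct else 0.0
    mastery[topic] += rate * (target - mastery[topic])

mastery = {"fractions": 0.5, "algebra": 0.5, "geometry": 0.5}
# simulate a student who struggles with algebra but not the other topics
answers = {"fractions": True, "algebra": False, "geometry": True}
for _ in range(6):
    topic = next_topic(mastery)
    update(mastery, topic, answers[topic])

print(next_topic(mastery))  # the weakest topic keeps being served
```

The ‘network effect’ mentioned above comes from doing this across millions of students: with enough data, the system can learn not just *which* topic to serve but *which presentation* of it works best for students like you.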

Big Data And Education

To me, it sounds like a no-brainer. It’s a fascinating use of Big Data (there’s that buzz word again) that has the potential to create significant change for the better on a massive scale. What’s great is that the success should be organic: the more students that use the platform, the better the experience should become. In the same way that search engines benefit from critical mass, the more information there is, the more accurate the personalisation to each student. Check out the video below for a bit more information.

Losing The Human Touch?

Of course, not everyone agrees. After digging around a little further online for information, I came across a fascinating and incredibly detailed blog post by Phil Macrae, a Canadian ‘explorer in the field of education’. Please do give it a read for a far more insightful commentary on the area than I could ever provide here. It provides a considered warning to such tech-evangelists (…guilty) against the simple replacement of the human dimension of learning with a ‘teaching machine’.

I’m relatively new to this subject so I have to remain open to persuasion. But it’s clear that Knewton are making waves. So far, they’ve attracted the notice of some pretty heavyweight players, including Pearson and investment from the Founders’ Fund and they’ve been recognised as a Technology Pioneer by the World Economic Forum in Davos (following in the footsteps of companies like Twitter, Firefox and Paypal).

The Potential Of Personalisation

It’s hard to say what education will look like in the future. Information is increasingly becoming simple to store and retrieve, disrupting the traditional methods that have been employed for many years. So it’s easy to believe that, like so many other areas of real life, personalisation must come into play whilst technology continues to drive down the costs. And that – surely – can only be a good thing.

Regardless of whether it’s likely to succeed or doomed to failure, as both a bystander and a parent, I can only be in favour of Knewton and what they’re looking to achieve. Whether, by going after such an audaciously large goal, they are the ones who actually succeed or simply prove to be the first of a new wave of technologies, however, remains to be seen.

Do you think that technology will ever replace the need for classroom-based teaching entirely? Or would relying on technology too heavily within a learning environment actually damage the process? I’d be interested to hear your comments below.  

photo credit: University of Maryland Press Releases via cc

Why Google Glass Is Only The First Step


Ready For Take Off

The Start of Something Big?

On the week that the first Explorer editions are being shipped to developers, I’m hardly alone in my excitement about just how important  Google Glass could turn out to be. Not only for the applications that we can imagine here today, in April 2013. But more importantly for the potential that this type of technology brings for advancements across areas that we haven’t yet considered.

If you view it as a building block for the re-imagining of almost every daily activity, from work and sport to basic methods of human communication, we can have no idea at this stage of how significant this next move into mobile computing/augmented reality will prove. However, I’m betting on it being a huge jump forwards.

I’m sure there will be issues with version 1, but we’ve got to be careful not to have unreasonable expectations. Bleeding-edge products always lack the crucial customer feedback that can only come once you’ve let third parties loose on your product. And it’s precisely that process – people starting to see how the technology could be used in their everyday lives and making the necessary adaptations – that will drive a steep growth in its popularity.

…Or The Green Light For Conflict?

But putting the optimism to one side for a minute, it’s obvious that the path towards widespread adoption is not going to be straightforward. Moving past the geek-attraction phase (ooh, it’s shiny, I want one of those…), the technology unearths a whole raft of issues that will inevitably cause tension between different groups.

By far the best article that I’ve read recently about the impact of Glass is by Jan Chipchase, Executive Creative Director of Global Insights at Frog. It’s well worth taking the time to read through this, particularly given the calibre of the author. For a product that’s both “on your face and in your face”, he argues that Google is the right company to bring this technology to market as:-

[a business with] a recent record of genuine innovation that stretches/defines social and behavioural norms with a strong revenue stream and deep enough pockets to have a fighting chance of medium to long-term success.

Privacy And The Invisible Impact

Positions are starting to be taken on either side of the privacy debate around Glass. Yet amongst such high profile posturing, few hold solid research about how the human condition will be affected, consciously or otherwise, when we become acutely aware of someone wearing technology which can record our every move. How many of us would think twice before making a statement in the future if we knew that it was to be recorded and retrievable by a company whose goal was to index that data for the purposes of serving ever-more relevant advertising to you? As Chipchase writes:

Any idiot can collect data. The real issue is how to collect data in such a way that meets both moral and legal obligations and still delivers some form of value.

An Argument For The Wider Public Good?

One way to ease the widespread adoption of Glass is to enable anyone to access on demand the video feed being recorded by others around them. Transparency of information will no doubt help ease a few concerns whilst crowd-sourcing views to make them collectively useful is likely to convince people of the wider public good in certain situations, with emergency situations or entertainment events being the most obvious.

Regardless, It’s Happening

The issues surrounding the introduction of Glass – whether in terms of privacy, the ownership of data, legislation or the evolution of basic body language in a social setting – are only just now starting to be considered. But I for one can’t wait to see how things move forwards. There are bound to be mistakes but progress demands failures along the road.

You may not agree with Ray Kurzweil et al about the approaching singularity – the point when technology and humanity will no longer be separate (current predictions point to 2040). But this looks very much to me like a significant jump forwards along that path. And, one way or another, whether in Google’s hands or elsewhere, it’s going to happen. And it’s going to be a helluva ride.

photo credit: vyxle via cc

Filling In The Gaps With Tim O’Reilly

Tim O’Reilly is a guy that you should really pay attention to. He’s been a leading commentator on key technology areas such as publishing, Web 2.0, open data and the burgeoning Maker movement for a number of years. The organisation he founded, O’Reilly Media, lives by the mantra of ‘changing the world by spreading the knowledge of innovators’ and he’s viewed as something of a master at both identifying trends and amplifying them.

Despite this reputation, he claims that he’s not very good at predicting the future. Instead, he simply talks about things that are happening today that seem interesting to him. As William Gibson once said, “The future is already here – it’s just not very evenly distributed” and O’Reilly’s mission is to help smooth that process.

He recently gave a talk to the Stanford Technology Ventures Program which is well worth checking out. Here are a few key themes from the talk:-


The importance of mobile computing is clear to anyone looking at website analytics. Yet we’re still very much in the early days of mobile and the change that’s coming is much more fundamental than simply a shift in the way that people access your website whilst on the move. Why? It’s all down to that piece of technology that you carry in your pocket which increasingly knows more – and better – information about you as an individual.

Why do applications like Foursquare or Runkeeper, for example, still need us to take an active part? Why do you have to check-in or click a button to tell your phone that you’ve started running? It already knows this information. There’s a revolution coming as businesses get built on the foundation of information that individuals don’t even have to go to the effort of submitting themselves. It’s all being done for them.
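To illustrate the idea (with thresholds and numbers invented purely for this example, not taken from any real app), detecting that a user has started running from accelerometer magnitudes alone, with no check-in button, might look something like this:

```python
# A toy activity detector: classify a window of accelerometer magnitude
# samples (in g) as "running" if enough of the window shows high
# acceleration. Real apps use far richer models than one threshold.

def is_running(samples, threshold=1.8, min_active=0.6):
    """True if at least min_active of the window exceeds the threshold."""
    active = sum(1 for g in samples if g > threshold)
    return active / len(samples) >= min_active

walking = [1.0, 1.1, 1.2, 1.0, 1.1]   # gentle, low-magnitude readings
running = [2.4, 2.1, 1.9, 2.6, 2.2]   # sharp, repeated impacts
print(is_running(walking), is_running(running))
```

The point is that the phone already holds the raw signal; the revolution described above is simply businesses doing this kind of inference for you, continuously, instead of asking you to press a button.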

You’ve probably heard of Square. Set up by Jack Dorsey, founder of Twitter, it lets users accept credit card payments on their mobiles. But the clever thing is that if you have the App open on your mobile, you can walk into a shop that’s using it and the cash register already knows that you’re there. That connection’s already been made – it’s live and waiting, ready for you to use it.


Too often, we’re still thinking about software as being something that lives inside a device. A good example is Linux. For many, it’s at best some kind of mysterious operating system that tech folk discuss and has no relevance to the laptop they use for work. Yet if you’re searching on the web, the chances are high that you’re using Google – which is powered by Linux.


Think about how to update the way employees work to take advantage of the fact that individuals will increasingly be connected to the internet in more powerful ways in professional settings.

Who doesn’t love the videos of skydiving prototypes of Google Glass? But whilst the current excitement (and concern) is currently focused on the consumer applications of this technology, once you start thinking about  how these technologies could potentially impact workflows, a new picture emerges. Give people ready access to the indexed knowledge of mankind and it’s fairly easy to imagine how certain low-level jobs can be turned into high-level jobs. After all, why train for years to learn something when you can simply follow live instructions?


The phrase comes from a 1960 research paper by JCR Licklider that foresaw the development in cooperation between men and computers. The technology businesses (such as Google and Amazon) that survived the crash before moving on to further success did so, at least in part, because they worked out how to get their users to contribute to what they did. Take a bow, Web 2.0.


Peter Norvig, Chief Scientist at Google once said: “We don’t have better algorithms. We just have more data”. Look at Google’s self-driving car. In 2004, the best entrant in the DARPA Grand Challenge, a competition for American driverless vehicles, drove 7 miles in 7 hours. Yet only seven years later, Google had designed an autonomous car that has driven hundreds of thousands of miles in ordinary traffic. So what changed?

Google had access to the data behind Google Street View. Or, to put it another way, the recorded memory of humans who drove those roads, stored in a global brain.


Investor Chris Sacca has been quoted as saying “What I learned from Google is to only invest in things that close the loop”. That is an incredibly important principle for startups who should always be trying to discover the loops in the world that their business can close.

For example, Uber is a taxi business that connects individuals with luxury cars for hire. The app knows the location of both the passenger and the driver and makes the connection so that you always know precisely where the vehicle is. Uber has closed the loop.
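A minimal sketch of that loop-closing, with entirely made-up driver names and coordinates: given the passenger’s live position and a set of live driver positions, compute great-circle distances and match the nearest driver.

```python
# Match a passenger to the nearest available driver using live GPS
# positions. A sketch of the "closed loop" idea, not Uber's actual system.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in km."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_driver(passenger, drivers):
    """Return (name, distance_km) of the closest driver."""
    name, pos = min(drivers.items(),
                    key=lambda d: haversine_km(*passenger, *d[1]))
    return name, haversine_km(*passenger, *pos)

passenger = (55.9533, -3.1883)           # Edinburgh city centre
drivers = {"car_a": (55.9570, -3.2000),  # hypothetical live positions
           "car_b": (55.9000, -3.3000)}

name, dist = nearest_driver(passenger, drivers)
print(name)  # car_a is closest
```

Once both ends of the journey are reporting positions continuously, showing the passenger “precisely where the vehicle is” is just this calculation repeated every few seconds.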

Think of just how powerful the business becomes once the rating mechanism that I mentioned in an earlier post has been integrated. A relatively small change now has the potential to completely disrupt the traditional regulation of taxi cab services. No longer does a cab driver just need to be trained and certified. In the modern world, he or she must also display social validation in the form of positive customer feedback – a bleak future no doubt for those drivers who drive carelessly, treat customers rudely or even play music too loudly in their vehicles.


O’Reilly argues that the concept of a business that exists solely for the purpose of making money for its shareholders is fundamentally flawed. Every business has an obligation to create value.

Current high-profile tech businesses (see Etsy, airbnb and Kickstarter) are successful precisely because they’ve focused on building an economy around their business. It is not simply about making money for themselves – they want other people to succeed on the back of what they’ve created in building an ecosystem.


To paraphrase a poem by Rilke:

    What we fight with is so small that when we win it makes us small; what we want is to be defeated decisively by successively greater beings.

Find hard problems. Take the example of a guy who quit a well-paid role with a hedge fund to work for a high-altitude (and high-risk) wind energy company. When asked why, he had one simple answer: he’d wanted to work for the startup because ‘the math is harder’.

People who want to work on a hard problem are the types of individuals you want with you in a startup. If you can get people to work on things that matter and inspire, it will carry far more weight than being driven by simple monetary gain.


So – how hard can it be?

Simplify, move to the cloud, automate, enhance intelligence, collect better data, help other people succeed and set goals worthy of your efforts.

There has to be a business idea or two in there, don’t you think?

(photo credit: Patrick Hoesley via photopin cc)

Welcome To The Internet of Everything

RobotIf you held a gun to my head and forced me to make a prediction about the shape of our future society, the one forecast that I could make today with absolute certainty is that I’d end up rambling on about the possible impact of technology until you were either bored or agreed. However, it’s a safe bet that almost every issue duking it out at the top of my list would relate in some way to the increasing impact of the deepening networks between people, objects and ‘big data’.

The Growth Of Networks

The increased power of connectivity has been most visible in the explosive growth of social media. Most of us are gradually realising that individuals are increasingly being tracked whilst simultaneously being presented with greater opportunities to make a significant impact by presenting themselves favourably online. But to my mind, simply taking offline communication and rehousing it on a digital platform has in many ways been a diversion so far (albeit an incredibly powerful, and essential, one).

It seems to me that networks are only now starting to show what they can achieve – both at scale and (just as importantly) at high speed. Don’t agree? Just have a think about the impact over recent times in areas such as file sharing, user-generated content, communicable diseases, the wisdom of the crowd and financial contagion.

The chances are good that you will have heard the growing buzz around the ‘Internet of Things’. If the phrase at least is new to you, here’s my (very) simple explanation. The internet to date has relied on humans to input the necessary data. But, as Kevin Ashton, the RFID pioneer, once wrote, “The problem is that people have limited time, attention and accuracy”. The answer? Let the computers find out everything that they need to know about things in the real world by gathering data directly, without the input of humans. Let them then use this increased and continual tracking for the benefit of all, whether that be to reduce waste, loss or cost.

(Aside: I’ve just realised that I’ve used the word ‘let’ above twice, unintentionally implying that computers are somehow sentient beings… that’s another post entirely …)

Connections Are Everything…

It’s not a new idea but it’s gathering pace like never before. Millions of new devices are regularly being connected to the internet and the potential for advancement goes far beyond simply having a Robonaut sending tweets from space.

So what could that mean in practice? Well, Evrythng is a company that I mentioned in a previous post – here’s a quick example of how their business fits into the picture:-

Let’s just think about this for a second. Connecting things that were previously silent within a network of networks with billions (or trillions) of connections on a global scale? The potential impact of this is so significant that it is guaranteed to eclipse even the massive disruption that the internet has already caused within the worlds of communication, education, business, entertainment and simple information retrieval amongst others.

The ‘Internet Of Everything’ Economy

So I was intrigued to read Cisco’s new report, in which they re-brand the concept as ‘The Internet of Everything Economy: How More Relevant And Valuable Connections Will Change The World‘. It’s a vast subject, so they’ve helpfully split the concept into four separate areas:-


People: Currently, we mostly just connect to the internet via devices and social networks. In the future, people will connect in far more flexible ways – think of the implications for healthcare, where a pill or clothing sensors will report your vital signs directly to your doctor via a secure internet connection.


Data: We’re used to devices gathering raw data and transmitting it somewhere for analysis. Increasingly, the ‘things’ themselves will combine and analyse that raw data before transmitting far more valuable information, which in turn enables us to make faster, more intelligent decisions.
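As a minimal sketch of that shift from raw data to information, imagine a device that condenses a batch of samples into a summary before anything crosses the network. This is purely illustrative – not any particular vendor's API – and the function name is my own:

```python
def summarise(raw_values):
    """Condense raw samples into compact, decision-ready information,
    so only the summary (not every sample) needs to be transmitted."""
    if not raw_values:
        return None
    return {
        "count": len(raw_values),
        "min": min(raw_values),
        "max": max(raw_values),
        "mean": sum(raw_values) / len(raw_values),
    }

# An hour of temperature samples reduced to four numbers
print(summarise([20.1, 20.4, 19.8, 21.0]))
```

The design point is bandwidth and attention: a doctor or a dashboard rarely needs every sample, just the shape of them.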


Things: Sensors and devices will be able to gather more data and increasingly understand the context in which that data can be useful.


Process: Critically, processes that add value are still to be developed – effectively working to ensure that the right information is delivered to the right person, at the right time, in the appropriate way.
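That last idea – the right information to the right person at the right time – can be sketched as a simple rule-based router. To be clear, this is a toy of my own invention, not anything from the Cisco report; the recipients, thresholds and field names are all hypothetical:

```python
def route(reading):
    """Decide who should receive a piece of information, and how
    urgently - a toy version of the 'process' layer's job."""
    # Hypothetical rule: an abnormal heart rate goes straight to the doctor
    if reading["type"] == "heart_rate" and reading["value"] > 120:
        return {"to": "doctor", "urgency": "immediate"}
    # Normal vital signs just accumulate in the patient's own log
    if reading["type"] == "heart_rate":
        return {"to": "patient_log", "urgency": "routine"}
    # Anything unrecognised is archived rather than pushed at anyone
    return {"to": "archive", "urgency": "none"}

print(route({"type": "heart_rate", "value": 135}))
```

Even this crude version shows why the process layer matters: without it, the flood of data simply lands on everyone at once.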

Clearly, whilst we might initially moan at the loss of sleep, few of us would refuse an alarm clock that wakes us up ten minutes earlier on a morning when the traffic is congested. We may even be increasingly interested in being able to track our groceries from field to table, if only to guarantee that our meat hasn’t had a past career in racing.

But once we start to apply this understanding at a greater scale, we have the potential to discover a means by which we can monitor, understand and manage our environment more effectively. Saving energy, regulating and distributing agricultural output and helping to provide access to clean water starts to tick a lot of boxes which, bluntly, need massive action given the challenges that we are all currently facing.

The Internet Of Everything: Tomorrow Starts Here.

Clearly Cisco have a business point to make here too, and I recommend reading the blog by Cisco CEO John Chambers, which makes a number of important points about the economy that is predicted to grow around the Internet of Everything. After all, there are some chunky numbers involved in the predictions (which Cisco, it should be said, is in a great position to capitalise upon commercially).

Increased Connectivity Comes At A Price

But let’s not be naive. It’s not going to be plain sailing all the way home in our automated cars to our self-filling fridges. If developments continue – and there’s every reason to think they will, at a rapidly accelerating pace – we could soon be facing challenges around security and privacy that leave society in a very dark place indeed.

Governments, businesses, organisations and individuals need to get involved – and quickly. Not to slow things down, or even simply to prevent mistakes being made – there will always be wrong turnings in any attempt to innovate. But to ensure that we use the unprecedented opportunities we will soon be faced with for the most effective purposes for us all – in business, in our communities and on a global scale.

Think it’s all pie in the sky? Or are we walking blindly into disaster by arming inanimate objects with the capacity for (limited) intelligence? There’s a sci-fi plot or two in there, that’s for sure. I’d love to hear from you in the comments below if you have any thoughts about future developments.

(photo credit: B.Romain via photopin cc)