As the weather starts to worsen for us Northern Hemisphere types, it’s been interesting to watch the debate develop around Uber’s use of surge pricing during a particularly snowy December weekend in New York.
“Uber is building a digital mesh – a grid that goes over the cities. Once you have that grid running in everyone’s pockets, there is a lot of potential for what you can build as a platform”
Like all modern businesses, Uber is generating a potential goldmine of user data. But it’s the use of that data that’s the current hot topic. With surge pricing, Uber relies on an algorithm that temporarily increases the price of a journey when the supply of cars gets tight. Relying on basic economics, a sharp increase in demand for rides (due to weather or infrequent events, such as New Year’s Eve) causes prices to spike upwards in order to entice more drivers out onto the roads to satisfy that demand.
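Uber’s actual pricing algorithm is proprietary, but the supply-and-demand principle can be sketched in a few lines. Everything below (the function name, the threshold, the cap) is an illustrative guess rather than Uber’s real logic:

```python
def surge_multiplier(ride_requests, available_drivers, base_threshold=1.2, cap=3.0):
    """Toy surge model: raise prices when demand outstrips driver supply.

    Returns a multiplier applied to the base fare. This simply illustrates
    the supply/demand principle; it is not Uber's real algorithm.
    """
    if available_drivers == 0:
        return cap
    ratio = ride_requests / available_drivers
    if ratio <= base_threshold:
        return 1.0  # supply is comfortably meeting demand
    # Scale the fare with the imbalance, up to a maximum cap
    return min(cap, round(ratio / base_threshold, 2))

base_fare = 10.00
multiplier = surge_multiplier(ride_requests=90, available_drivers=30)
print(f"Surge x{multiplier}: fare = £{base_fare * multiplier:.2f}")
# → Surge x2.5: fare = £25.00
```

In this toy model a 3:1 ratio of requests to drivers produces a 2.5x fare; a real system would presumably also weigh location, time of day and predicted demand.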
It all sounds fine in principle, although there are plenty of suggestions about alternative models that Uber could be using. But the current problem is that every time they use surge pricing, Uber walks headlong into a customer backlash, fanned by the social media platforms that are so integral to the daily routines of their target customers. Many are now asking the question: is it worth making extra money out of your loyal customers during peak times if it means risking customer dissatisfaction over the longer term?
Of course, variable pricing as a concept is not new. Every time you fly, the chances are that you’ll end up sitting next to someone on the plane who paid a different price. Yet there are still a huge number of companies who leave their prices unchanged whilst supply and demand vary on a daily basis. Is it just the case that we as consumers need to catch up with dynamic pricing models as they become more common? To my mind, it’s not too far-fetched to imagine society moving towards an individual ‘eBay on steroids’ style of commerce as we become increasingly connected and systems get better at accurately identifying demand.
One thing that is certain is that Uber is a young business that is making enviable sums of cash. It’s clearly doing something very right by focusing on monetisation (as opposed to traction) far earlier than many other tech giants did at a similar stage. However, it’ll be interesting to see how things pan out over the longer term as Uber becomes more ubiquitous.
If you’re reading this blog, the chances are that you’ve got at least a passing interest in technology. At the same time, you’re probably creating jobs for others as an entrepreneur or you’re an employee yourself. Either way, at some point, the question of whether technology could replace jobs – even yours – in the future has probably crossed your mind.
In the ideal world, everyone should ultimately win following advances in technology: the consumer gets cheaper, better services and products; the new business creates new jobs; and, as the technologies collide with mainstream demand, there’s an exodus of talent from the existing industries to the new exciting frontiers. The identity of the employer might change but, for the most part, everyone finds something to do and keeps working.
But does the evidence actually back this up?
Rising Productivity But A Slowdown In Employment Growth
The US statistics seem to back this view up. Despite productivity and employment growth enjoying a very similar upwards trajectory ever since the Second World War, things changed abruptly in 2000 when productivity kept rising whilst the growth in employment stagnated.
So have we reached a tipping point in the continual development of new technology? Or is it pointless worrying since people have always found something else to do when faced with unemployment caused by technology in the past?
If we assume (perhaps naively) that the statistics are correct, we face a key question that economists know only too well the world over: can we honestly identify technology as being the main reason for this slowdown – or should we also be looking at the vast range of other macroeconomic factors?
The ‘Hollowing-Out’ Of The Middle Classes
I’m no economist. But putting aside the anecdotal scare stories in the popular press about the threat of faceless technological progress for a moment, the area that’s really of interest to me is how technology is affecting certain types of roles, such as clerical work and professional services. I’m not about to define the middle class here but if we’re looking at trends, it’s clear that computers are being used very effectively in certain areas of the workforce that share similar traits.
Arguably, it’s those in the clerical and professional jobs that may have more to worry about as computers continue to improve their problem-solving abilities using a combination of artificial intelligence and big data. Take Watson, for example, IBM’s computer that beat the human contestants in a version of the TV show ‘Jeopardy!’ in 2011. That technology is now being directed towards a whole range of areas, including healthcare, customer service, investment advice and cooking.
So, for example, if you look at how Watson is being used in the field of medicine, the computer is now learning how to diagnose patients by combining its ability to assess vast amounts of medical data with natural-language processing and analytics that are continually improving. It’s still early days but the potential of this scale of computing power is becoming clear.
How To Keep Your Job
Want to protect yourself? Current thinking is that you need a job further up the chain that requires you to use creative, social and problem-solving skills, which will be far harder to automate over the next few years. In these areas, technology isn’t able to replace the individuals but is instead assisting them. Technology is used to enable the employee to do his or her work more effectively – think of the joiner who uses an electric drill to work more efficiently. He doesn’t get his P45 because the employer chooses to employ the drill instead.
Yet increasing numbers of people are still required to carry out low-skill jobs. Automation is just not very good yet at replacing janitors, home helps and restaurant workers, for example. Plus it’s important to remember that in many cases, technology is helping businesses not only to survive but also to expand quickly when they’re faced with a lack of available labour to meet the growing demand for their new products and services.
Is It Just A Case Of Learning New Skills?
The reality is that many new technology companies are still heavily reliant on the humans behind the scenes. For example, Amazon might be increasingly dependent on Kiva to replace human warehouse staff with robots, but Kiva itself has a huge demand for new software engineers. The success of that business depends on finding talented individuals to constantly develop improved algorithms to ensure that the robots act more efficiently. Robots have never been good at dealing with change and uncertainty so if your job has that in droves, it’s safe to say that there could be a growing demand for your time.
The Autonomous Economy: Waiting In The Wings
But there’s (at least) one more significant factor that we need to consider when looking at how the development of technology in the modern era differs from the past. And that is quite simply that, in some areas, our economy is now developing without any direct involvement from humans.
Or, more accurately, more can now be done automatically by computers that are learning how to do things by applying themselves to big data using new advances in artificial intelligence and smart analytics. This is what W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab, means by “digital processes talking to other digital processes and creating new processes”.
Result? We can do more with fewer people and some jobs become obsolete.
Here’s one of his examples. You no longer speak to humans as often when you check in for a flight these days. Now you simply type your booking number into a machine in the airport. That one simple act sets in motion a chain of events that involves many machines speaking to each other simultaneously about a huge range of topics without any human intervention (including flight status, your past history, security checks, seat choice, foreign immigration and, in some cases, automatic decisions about weight distribution on the plane). Decisions are being made automatically in a way that was inconceivable before the networked internet age.
Whether you believe in the argument that technology is destroying jobs or not, it does seem beyond question that income is moving gradually in favour of the so-called ‘tech-savvy’. The untapped potential of computer power, big data and individuals skilled in developing the sector is large enough to drive an exponential advance in digital technologies over the next few decades.
As technology continues to develop, it’s fascinating to watch how people react to enforced changes in the workplace as a result. Some actively try to keep abreast of developments whilst others remain passionately focused on ignoring anything that is not directly relevant (as they see it) to the functions of their daily job. But whilst arguments vary as to how quickly those changes are taking place, it’s becoming increasingly clear that we’re developing a skills gap in the country.
With numbers that large, it can be hard to grasp what they mean in practice. So it’s maybe useful to consider one statistic in particular: how much the internet contributes to UK GDP when compared to other European countries. When the UK has been identified as ‘the most internet-based major economy’, it’s a safe bet to assume that we face greater potential opportunities than many other places – for example, UK citizens spend on average £1,083 per year on online shopping, compared to say those in France (who spend £487 p.a.). I don’t think that we’re a country that’s so far ahead of our peers competitively that we can afford to ignore the kind of opportunity that comes from this level of online activity.
Using technology that helps people be increasingly flexible about how and where they work has obvious advantages whilst the decreasing cost of more efficient technologies can help to protect jobs that are currently threatened by cost-cutting.
“If we are to make the most of the big data opportunity, business and government need to take collective responsibility for helping the public to better understand the value exchange”
I think this is a key point. If you’re running a business, it’s up to you alone to convince your customers that by choosing (and that choice is the key) to share their valuable personal information with you, they will be rewarded with a far more efficient and enjoyable shopping experience as a result. For every modern business, it’s my view that building that trust by continually ‘getting it right’ (for which read not assuming that a customer has somehow given implied approval to your intrusive and unwanted marketing campaigns) is absolutely crucial for long-term success.
Or perhaps it’s not quite that simple. J P Rangaswami, Chief Scientist at Salesforce, gave a fantastically powerful talk at the recent Turing Festival in which he reminded everyone that, before the costs of mass migration dropped and people became strangers to their neighbours, it was entirely normal to buy goods and services only from people you knew personally, and privacy was no big deal. It’s a source of conflict but, whatever the outcome, there’s no denying that as more commerce (and real life) is conducted using technology, these issues will only become more acute.
3. Digital literacy
Yet it’s this final point that was the original reason for this meandering blog post. For the country to make genuine headway and grasp the opportunities that lie ahead, we need to get the talent in place. And the incredible thing is – for the most part, that talent’s available already. We’re just not using it properly.
It might be a truism to say that many young people in the country already have many of the digital skills that are necessary to fill the gaps that will become more acute with each passing month. But it’s a truism because, quite frankly, it’s true. In May to July 2013, 960,000 young people aged 16-24 were unemployed. When many current business leaders are struggling to keep up with the pace of change, it’s hard to believe that many of those digital natives don’t possess exactly the type of digital skills that are going to be increasingly required by businesses in the developing environment.
Who’s Going To Lead The Charge?
But who’s going to stand up and take responsibility for this? As Simon Devonshire writes on his blog:-
“If we believe it is Government, then exactly which Minister is accountable for the digital transformation of the economy? I don’t think we have one. I’m not aware that the Bank of England has anyone focused on understanding the economic impact of the internet, despite the UK’s lead of e-commerce as a percentage of GDP. Universities offer computer science education, but that is only one of the ingredients necessary to realise the digital opportunity”
We’re in real danger of losing ground here by sitting back and relying on others to make the necessary changes. Some progress is being made — for example, the decision to make coding (sort of) compulsory in UK schools from 2014. But at the same time, we can’t assume that a system of education designed many years ago has the best structure to deliver this. I’ve mentioned the quote on this blog that “65% of kids at school today will end up in jobs that have not yet been invented”. Parents could do worse than helping their kids to learn coding at an early age. Not sure where to start? Check out Codecademy or another option on this list of resources for inspiration.
But it will take more than this.
We Need A Culture of Curiosity
As the demand for workers with digital skills is exploding, we need to train the young to learn the skills that will make them employable over the coming years whilst working hard to fill existing skills gaps in businesses today with those who are currently desperate for a job.
“Part of what needs to be learned is how to learn, over and over again. Simply learning where the button is for ‘cut’ or ‘undo’ is not enough.”
To me it seems that it’s not necessarily about teaching the skills quite so much as ensuring that we each develop the curiosity that’s required to adopt a mentality where we’re all hungry to learn. Each and every day. And that responsibility falls squarely on each of our shoulders as individuals.
If you’ve got any thoughts, I’d be interested to hear them.
At first, the idea of discussing robots in the lavish and dated wooden and mirrored surroundings of The Famous Spiegeltent felt slightly surreal. Now I realise it was ideal. Why? The Spiegeltent was built in 1920 – exactly the same year that the word ‘robot’ was used for the first time ever in a play called ‘R.U.R. (Rossum’s Universal Robots)‘ by Karel Capek.
The word ‘robot’ in Capek’s native Czech means the forced labour that serfs were required to carry out on their master’s lands. Of course, in fiction, robots often appear as metaphors for human problems, whether slavery or racism. But as they become increasingly visible in society, will we end up teaching them such concepts as cruelty – or will they be capable of learning such flaws themselves? In essence, how human do we actually want our artificial intelligence to be?
The session was led by Subramanian Ramamoorthy, Lecturer in Robotics at Edinburgh University, who gave his expert views on how far robots are already intertwined with our daily lives and how much further that’s likely to develop. It was a fascinating chat; here’s what I took away from the session:
When Will Robots Take Over The World?
Let’s cut to the chase and start with the million dollar question.
The answer? Not any time soon. I get the sense that it’s a question that researchers get asked way too often. There are various reasons why robots actually taking over the world is unlikely, but high up there on the list is the simple fact that there’s no logical reason why they’d want to. Even humans don’t seek world domination (well, most of us). And, even if they did change their minds, their batteries wouldn’t last (honestly).
Interestingly, many people seem to assume that robots will develop some malevolent intention as they evolve – perhaps a view that’s been heavily influenced by Hollywood (e.g. Skynet). Yet the reality is that most developments in robotics currently focus on assistive, rather than disruptive, technologies. The most obvious future uses of robotics involve helping humans to carry out manual and repetitive tasks (for example, cleaning cups) or remote exploration, for example.
Of course, there is always the possibility of unrest if robots displace vast numbers of workers. But in many ways, the more interesting question is how this technology could be applied to complement existing human roles. Consider how we currently search for a missing person for example. If there’s no trace found, it may be very difficult to justify the cost of a policeman searching a remote location for an extended period of time. But the cost/benefit analysis of asking a robot to carry out the task for an extended period of time may look entirely different.
For example, it’s not hard to see how any army will be able to make use of these (don’t worry, you’re not alone if you start to get a little creeped out by progress here):-
Robots In Space
Robots have been up in orbit for a while. But far from simply replicating fiction, it’s useful to understand why they’re actually required. Whilst an astronaut’s job might appear glamorous, the reality is that much of the daily routine is just that – a mass of repetitive boring jobs. I suspect few astronauts dreamed that the spectrum of tasks that they’d be required to carry out when pushing the boundaries of mankind would involve quite so many requests to empty the toilet on the International Space Station…
The question was asked why we seem to be focusing on building more humanoid robot shapes than purpose-built structures. It’s clear that having a cute wee fella that speaks to you like Dark the Robot (pictured above) on a stage brings a favourable response that gets people talking. It’s almost PR for the field as it entices people into learning more about the subject.
Those of us who live outwith the rarefied circles of AI/robotics research but within ready reach of great films appear to hold an overly-optimistic assumption about the current rate of progress. Developments continue, but the evolution of robotic abilities still lags well behind the development of a human child, for example. Progress is being made but it’s important to remember that, in general, we’re still only able to teach robots how to carry out certain tasks with effort – we might have developed a robot that learned how to fold towels, but it’s still taking 25 minutes per towel.
An Ethical Stramash?
Surprisingly not, for the most part. Despite the advances in robotics being only incremental, those involved in the field apparently get asked questions about ethics frequently, long before any such issues could actually arise. The reality is that, except in very specific areas (such as medical technology), researchers are still a long way from having to tackle particularly taxing ethical problems.
So What Does The Future Hold?
Good question. Everything. And yet, many important limitations remain.
When you actually see a robot in the flesh (so to speak), as I did on Friday, you can’t help but be struck by precisely how complex they really are. OK, so we might laugh at their basic footballing skills, but the reality is that the work that’s taken place to get to that stage is incredible.
The reason that any robot ever moves comes down to a complex combination of software programming and hardware – every joint contains a motor that is activated by programming working in combination with the robot’s other senses, such as vision (identifying colour and shape), touch sensors, accelerometers and sonar, amongst others. Putting all that together so that it works as intended is no small task, to say the least.
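To make that idea concrete, here’s a deliberately simplified sense-decide-act loop of the kind described above. The sensor inputs, gain value and safety rule are invented for illustration and don’t correspond to any real robot’s API:

```python
# A minimal sense-decide-act loop: read sensors, compute a motor command,
# repeat. The values here are illustrative, not from any real robot.
def control_step(joint_angle, target_angle, touch_triggered, gain=0.2):
    """Compute one motor command from sensor input (toy proportional control)."""
    if touch_triggered:
        return 0.0  # touch sensor fired: stop the motor for safety
    error = target_angle - joint_angle
    return gain * error  # motor command proportional to remaining error

# Simulate a joint moving towards a 90-degree target
angle = 0.0
for _ in range(30):
    angle += control_step(angle, 90.0, touch_triggered=False)
print(round(angle, 1))  # converges close to the 90-degree target
```

Even this toy version hints at the difficulty: a real robot runs dozens of these loops at once, fusing noisy vision, touch and accelerometer data rather than a single clean angle reading.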
One of the stated goals of the Robot World Cup is to evolve the technology so that a team of robot footballers can actually defeat the human World Cup winners by 2050. Is it likely? I don’t see why not when you take a look at the most recent robot from DARPA.
For now, it seems that the field is focused on building fundamentally better robots (physically) whilst improving the existing skills of interaction (via programming advances). We’re still a long way away from developing robots that are self-powered with the ability to repair themselves at will. But whatever the evidence to the contrary, I can’t help but think that this is another area where things are just going to accelerate in the future.
It’s a fascinating topic. I’d love to fast-forward twenty years and revisit this post again. But in the meantime, I’ll leave you with one thought.
Rapidly ageing population of the world – meet ASIMO.
65% of the kids at school today will end up in jobs that have not yet been invented
Accurate or not, there’s no doubt that those responsible for designing education systems are facing some real challenges. After all, how do you teach students effectively if you’ve got no idea of the skills they need to be successful?
With an uncertain future, the one thing that is clear is that there are opportunities for us to improve the system of education that’s in place and which hasn’t, in many cases, changed for decades. I’m coming across increasing numbers of articles online about the challenges – for example here and here. Now, I’m far from an expert in this area. But as a keen advocate of the disruptive potential of Massive Open Online Courses, or MOOCs as they’ve become known (tip: check out Steve Blank’s excellent ‘How To Build A Startup’ course on Udacity), and the author of an MBA dissertation on open access publishing, I’m pretty certain that removing the barriers to learning that currently exist can only be a good thing.
Knewton And Adaptive Learning
When I was in Glasgow at the Digital 2013 conference last week, Tom Hall (@tomjhall) from Pearson introduced me to something that had – somehow – passed me by so far. Knewton is an adaptive learning platform that provides personalised educational content. It accumulates data from students as they move through the learning process and then uses that information to make the process more effective by personalising the experience for each individual.
The net result is that the curriculum adapts directly to the needs of each user, piecing together the perfect individual bundle of content for each student. To put it simply: if it works well, more people will be more engaged and therefore more successful at learning, as they will be presented with the information (and subsequently tested on it) in a way that suits them, as opposed to the ‘one-size-fits-all’ mentality that we’ve been forced to accept up until now.
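Knewton’s actual models are proprietary, but the basic adapt-and-update loop can be sketched. The item names, difficulty scores and update rule below are purely illustrative:

```python
# Toy adaptive-learning loop: pick the next question whose difficulty best
# matches the student's estimated mastery, then update that estimate from
# the answer. A sketch only; real platforms use far richer models.
def pick_next_item(mastery, items):
    """Choose the item whose difficulty is closest to current mastery."""
    return min(items, key=lambda item: abs(item["difficulty"] - mastery))

def update_mastery(mastery, correct, step=0.1):
    """Nudge the mastery estimate up or down after each answer."""
    return min(1.0, mastery + step) if correct else max(0.0, mastery - step)

items = [
    {"name": "fractions-basics", "difficulty": 0.2},
    {"name": "fractions-mixed", "difficulty": 0.5},
    {"name": "fractions-word-problems", "difficulty": 0.8},
]

mastery = 0.3
chosen = pick_next_item(mastery, items)
print(chosen["name"])  # the easiest item sits nearest to this student's level
mastery = update_mastery(mastery, correct=True)
print(round(mastery, 1))
```

The organic improvement mentioned below falls out naturally: the more answers the system sees, the better its mastery estimates, and so the better its next choice of item.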
Big Data And Education
To me, it sounds like a no-brainer. It’s a fascinating use of Big Data (there’s that buzz word again) that has the potential to create significant change for the better on a massive scale. What’s great is that the success should be organic: the more students that use the platform, the better the experience should become. In the same way that search engines benefit from critical mass, the more information there is, the more accurate the personalisation to each student. Check out the video below for a bit more information.
Losing The Human Touch?
Of course, not everyone agrees. After digging around a little further online for information, I came across a fascinating and incredibly detailed blog post by Phil Macrae, a Canadian ‘explorer in the field of education’. Please do give it a read for a far more insightful commentary on the area than I could ever provide here. It provides a considered warning to such tech-evangelists (…guilty) against the simple replacement of the human dimension of learning with a ‘teaching machine’.
I’m relatively new to this subject so I have to remain open to persuasion. But it’s clear that Knewton are making waves. So far, they’ve attracted the notice of some pretty heavyweight players, including Pearson, secured investment from the Founders Fund, and been recognised as a Technology Pioneer by the World Economic Forum in Davos (following in the footsteps of companies like Twitter, Firefox and PayPal).
The Potential Of Personalisation
It’s hard to say what education will look like in the future. Information is becoming increasingly simple to store and retrieve, disrupting the traditional methods that have been employed for many years. So it’s easy to believe that, like so many other areas of real life, personalisation must come into play as technology continues to drive down the costs. And that – surely – can only be a good thing.
Regardless of whether it’s likely to succeed or doomed to failure, as both a bystander and a parent, I can only be in favour of Knewton and what they’re looking to achieve. Whether, in going after such an audaciously large goal, they are the ones who actually succeed, or whether they simply prove to be the first of a new wave of technologies, remains to be seen.
Do you think that technology will ever replace the need for classroom-based teaching entirely? Or would relying on technology too heavily within a learning environment actually damage the process? I’d be interested to hear your comments below.
On the week that the first Explorer editions are being shipped to developers, I’m hardly alone in my excitement about just how important Google Glass could turn out to be. Not only for the applications that we can imagine here today, in April 2013. But more importantly for the potential that this type of technology brings for advancements across areas that we haven’t yet considered.
If you view it as a building block for the re-imagining of almost every daily activity, from work, sport or just basic methods of human communication, we can have no idea at this stage of how significant this next move into mobile computing/augmented reality will prove. However, I’m betting on it being a huge jump forwards.
I’m sure there will be issues with version 1 but we’ve got to be careful not to have unreasonable expectations. Bleeding edge products always lack the initial crucial customer feedback that can only come once you’ve let third parties loose on your product. And it’s precisely that process, in which people start to see how the technology could be used in their everyday lives and make the necessary adaptations, that will drive a steep growth in its popularity.
…Or The Green Light For Conflict?
But putting the optimism to one side for a minute, it’s obvious that the path towards widespread adoption is not going to be straightforward. Moving past the geek-attraction phase (“ooh, it’s shiny, I want one of those…“), the technology unearths a whole raft of issues that will inevitably cause tension between different groups.
[a business with] a recent record of genuine innovation that stretches/defines social and behavioural norms with a strong revenue stream and deep enough pockets to have a fighting chance of medium to long-term success.
Privacy And The Invisible Impact
Positions are starting to be taken on either side of the privacy debate around Glass. Yet amongst such high profile posturing, few hold solid research on how the human condition will be affected, consciously or otherwise, when we become acutely aware of someone wearing technology which can record our every move. How many of us would think twice before making a statement in the future if we knew that it was to be recorded and made retrievable by a company whose goal is to index that data for the purposes of serving ever-more relevant advertising to us? As Chipchase writes:
Any idiot can collect data. The real issue is how to collect data in such a way that meets both moral and legal obligations and still delivers some form of value.
An Argument For The Wider Public Good?
One way to ease the widespread adoption of Glass is to enable anyone to access on demand the video feed being recorded by others around them. Transparency of information will no doubt help ease a few concerns whilst crowd-sourcing views to make them collectively useful is likely to convince people of the wider public good in certain situations, with emergency situations or entertainment events being the most obvious.
Regardless, It’s Happening
The issues surrounding the introduction of Glass – whether in terms of privacy, the ownership of data, legislation or the evolution of basic body language in a social setting – are only just now starting to be considered. But I for one can’t wait to see how things move forwards. There are bound to be mistakes but progress demands failures along the road.
You may not agree with Ray Kurzweil’s predictions about the approaching singularity – the point when technology and humanity will no longer be separate (current predictions point to 2040). But this looks very much to me like a significant jump forwards along that path. And, one way or another, whether in Google’s hands or elsewhere, it’s going to happen. And it’s going to be a helluva ride.
Tim O’Reilly is a guy that you should really pay attention to. He’s been a leading commentator around key technology areas such as publishing, Web 2.0, open data and the burgeoning Maker movement for a number of years. The organisation he founded, O’Reilly Media, lives by the mantra of ‘changing the world by spreading the knowledge of innovators’ and he’s viewed as something of a master at both identifying trends and amplifying them.
The importance of mobile computing is clear to anyone looking at website analytics. Yet we’re still very much in the early days of mobile and the change that’s coming is much more fundamental than simply a shift in the way that people access your website whilst on the move. Why? It’s all down to that piece of technology that you carry in your pocket which increasingly knows more – and better – information about you as an individual.
Why do applications like Foursquare or Runkeeper, for example, still need us to take an active part? Why do you have to check-in or click a button to tell your phone that you’ve started running? It already knows this information. There’s a revolution coming as businesses get built on the foundation of information that individuals don’t even have to go to the effort of submitting themselves. It’s all being done for them.
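As a thought experiment, here’s roughly how a phone could infer that a run has started without any button press, simply by watching accelerometer readings. The threshold and window size are illustrative guesses, not anything from Runkeeper’s actual code:

```python
# Sketch of passive activity detection: flag "running" when the last few
# accelerometer magnitude readings (m/s^2) all show sustained high movement.
# Threshold and window are invented values for illustration only.
def is_running(accel_magnitudes, threshold=12.0, window=5):
    """Return True if the last `window` readings all exceed the threshold."""
    if len(accel_magnitudes) < window:
        return False
    return all(a > threshold for a in accel_magnitudes[-window:])

walking = [9.9, 10.2, 10.1, 10.4, 10.0, 10.3]
running = [9.9, 10.2, 13.5, 14.1, 13.8, 14.6, 13.9]
print(is_running(walking))  # low, steady readings: not running
print(is_running(running))  # sustained high movement detected
```

Production systems would clearly do something far more sophisticated (filtering noise, classifying gait patterns, saving battery), but the principle is the same: the data is already there, waiting to be acted on.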
You’ve probably heard of Square. Set up by Jack Dorsey, co-founder of Twitter, it lets users accept credit card payments on their mobiles. But the clever thing is that if you have the app open on your mobile, you can walk into a shop that’s using it and the cash register already knows that you’re there. That connection’s already been made – it’s live and waiting, ready for you to use it.
SOFTWARE ABOVE THE LEVEL OF THE DEVICE
Too often, we’re still thinking about software as being something that lives inside a device. A good example is Linux. For many, it’s at best some kind of mysterious operating system that tech folk discuss and has no relevance to the laptop they use for work. Yet if you’re searching on the web, the chances are high that you’re using Google – which is powered by Linux.
Who doesn’t love the videos of skydiving prototypes of Google Glass? But whilst the excitement (and concern) is currently focused on the consumer applications of this technology, once you start thinking about how these technologies could potentially impact workflows, a new picture emerges. Give people ready access to the indexed knowledge of mankind and it’s fairly easy to imagine how certain low-level jobs can be turned into high-level jobs. After all, why train for years to learn something when you can simply follow live instructions?
MAN-COMPUTER SYMBIOSIS
The phrase comes from a 1960 research paper by J.C.R. Licklider that foresaw the development of cooperation between humans and computers. The technology businesses (such as Google and Amazon) that survived the dot-com crash before moving on to further success did so, at least in part, because they worked out how to get their users to contribute to what they did. Take a bow, Web 2.0.
MORE (AND BETTER) DATA
Peter Norvig, Chief Scientist at Google, once said: “We don’t have better algorithms. We just have more data”. Look at Google’s self-driving car. In 2004, the best-placed vehicle in the DARPA Grand Challenge, a competition for American driverless vehicles, managed 7 miles in 7 hours. Yet only six years later, Google had designed an autonomous car that has driven hundreds of thousands of miles in ordinary traffic. So what changed?
Google had access to the data behind Google Street View. Or, to put it another way, the recorded memory of humans who drove those roads, stored in a global brain.
CLOSE THE LOOP
Investor Chris Sacca has been quoted as saying “What I learned from Google is to only invest in things that close the loop”. That is an incredibly important principle for startups, which should always be trying to discover the loops in the world that their business can close.
For example, Uber is a taxi business that connects individuals with luxury cars for hire. The app knows the location of both the passenger and the driver and makes the connection so that you always know precisely where the vehicle is. Uber has closed the loop.
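As a toy sketch of what ‘closing the loop’ means here: once the app knows both locations, the connection can be made automatically. The driver names and coordinates below are invented, and a real dispatch system would use road distance and driver availability rather than straight-line geometry – this is illustration only:

```python
import math

def nearest_driver(passenger, drivers):
    """Toy loop-closer: given a passenger's location and live driver
    locations, make the connection without anyone picking up a phone.
    Uses straight-line distance purely for illustration."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return min(drivers, key=lambda name: dist(passenger, drivers[name]))

# Invented example data: driver name -> (x, y) position
drivers = {"Ana": (0.0, 1.0), "Ben": (2.0, 2.0), "Caz": (0.5, 0.2)}
print(nearest_driver((0.0, 0.0), drivers))  # Caz
```

The point isn’t the geometry – it’s that both ends of the transaction are visible to the system, so no information is lost between request and pickup.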
Think of just how much more powerful the business becomes once the rating mechanism that I mentioned in an earlier post is integrated. A relatively small change now has the potential to completely disrupt the traditional regulation of taxi cab services. No longer does a cab driver simply need to be trained and certified. In the modern world, he or she must also display social validation in the form of positive customer feedback – a bleak future, no doubt, for those drivers who drive carelessly, treat customers rudely or even play music too loudly in their vehicle.
CREATE MORE VALUE THAN YOU CAPTURE
O’Reilly argues that the concept of a business that exists solely for the purpose of making money for its shareholders is fundamentally flawed. Every business has an obligation to create value.
Current high-profile tech businesses (see Etsy, Airbnb and Kickstarter) are successful precisely because they’ve focused on building an economy around their business. It is not simply about making money for themselves – in building an ecosystem, they want other people to succeed on the back of what they’ve created.
What we fight with is so small that when we win it makes us small; what we want is to be defeated decisively by successively greater beings.
Find hard problems. Take the example of a guy who quit a well-paid role with a hedge fund to work for a high-altitude (and high-risk) wind energy company. When asked why, he had one simple answer: he wanted to work for the startup because ‘the math is harder’.
People who want to work on a hard problem are the types of individuals you want with you in a startup. If you can get people to work on things that matter and inspire, it will carry far more weight than being driven by simple monetary gain.
JOIN THE DOTS
So – how hard can it be?
Simplify, move to the cloud, automate, enhance intelligence, collect better data, help other people succeed and set goals worthy of your efforts.
There has to be a business idea or two in there, don’t you think?
If you held a gun to my head and forced me to make a prediction about the shape of our future society, the one forecast that I could make today with absolute certainty is that I’d end up rambling on about the possible impact of technology until you were either bored or agreed. However, it’s a safe bet that almost every issue duking it out at the top of my list would relate in some way to the increasing impact of the deepening networks between people, objects and ‘big data’.
The Growth Of Networks
The increased power of connectivity has been most visible in the explosive growth of social media. Most of us are gradually realising that individuals are increasingly being tracked whilst simultaneously being presented with greater opportunities to make a significant impact by presenting themselves favourably online. But to my mind, simply taking offline communication and rehousing it on a digital platform has in many ways been just a diversion so far (albeit an incredibly powerful, and essential, one).
The chances are good that you will have heard the growing buzz around the ‘Internet of Things’. If the phrase at least is new to you, here’s my (very) simple explanation. The internet to date has relied on humans to input the necessary data. But, as Kevin Ashton, the RFID pioneer, once wrote, “The problem is that people have limited time, attention and accuracy”. The answer? Let the computers find out everything that they need to know about things in the real world by gathering data directly, without the input of humans. Let them then use this increased and continual tracking for the benefit of all, whether that be to reduce waste, loss or cost.
(Aside: I’ve just realised that I’ve used the word ‘let’ above twice, unintentionally implying that computers are somehow sentient beings… that’s another post entirely …)
Connections Are Everything…
It’s not a new idea but it’s gathering pace like never before. Millions of new devices are regularly being connected to the internet and the potential for advancement goes far beyond simply having a Robonaut sending tweets from space.
Let’s just think about this for a second. Connecting things that were previously silent within a network of networks with billions (or trillions) of connections on a global scale? The potential impact of this is so significant that it is guaranteed to eclipse even the massive disruption that the internet has already caused within the worlds of communication, education, business, entertainment and simple information retrieval amongst others.
Currently, we mostly just connect to the internet via devices and social networks. In the future, people will connect in far more flexible ways – think of the implications for healthcare where a pill or clothing sensors will report back on your vital signs directly to your doctor via a secure internet connection.
We’re used to devices gathering and transmitting data somewhere for analysis. Increasingly, the ‘things’ will combine and analyse that raw data before transmitting far more valuable information, which in turn enables us to make faster, more intelligent decisions.
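A minimal sketch of that shift, assuming a hypothetical temperature sensor: the device buffers raw readings locally and transmits only a compact, decision-ready summary. The field names and alert threshold are invented for illustration:

```python
from statistics import mean

class EdgeSensor:
    """Hypothetical device that summarises raw readings on-board,
    sending upstream only the distilled, actionable result."""

    def __init__(self, alert_threshold):
        self.alert_threshold = alert_threshold
        self.buffer = []

    def record(self, reading):
        # Raw data stays on the device...
        self.buffer.append(reading)

    def transmit(self):
        # ...and only a compact summary travels over the network.
        summary = {
            "count": len(self.buffer),
            "mean": round(mean(self.buffer), 2),
            "peak": max(self.buffer),
            "alert": max(self.buffer) > self.alert_threshold,
        }
        self.buffer = []  # raw readings are discarded after summarising
        return summary

sensor = EdgeSensor(alert_threshold=30.0)
for temp in [21.5, 22.0, 31.0, 22.5]:
    sensor.record(temp)

print(sensor.transmit())
# {'count': 4, 'mean': 24.25, 'peak': 31.0, 'alert': True}
```

Four readings in, one small message out – and the receiving end gets an alert flag it can act on immediately, rather than raw numbers it still has to interpret.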
Sensors and devices will be able to sense more data and increasingly understand the possible context in which that data can be useful.
Critically, processes are still to develop which will add value – effectively working to ensure that the right information is delivered to the right person at the right time in the appropriate way.
Clearly, whilst we might initially moan at the loss of sleep, few of us would refuse an alarm clock that wakes us up ten minutes earlier on a morning when the traffic is congested. We may even be increasingly interested in being able to track our groceries from field to table, if only to guarantee that our meat hasn’t had a past career in racing.
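The smart alarm idea above can be sketched in a few lines – a toy calculation only, which assumes the traffic prediction arrives from some external source and simply shifts the wake-up time by the expected extra commute (capped, for sanity):

```python
def adjusted_alarm(usual_alarm_min, usual_commute_min, predicted_commute_min):
    """Toy smart alarm: wake up earlier by however much longer than
    usual the predicted commute is. Times are minutes past midnight;
    the 60-minute cap is an invented safeguard, and the traffic
    prediction itself is assumed to come from elsewhere."""
    delay = max(0, predicted_commute_min - usual_commute_min)
    return usual_alarm_min - min(delay, 60)

# Usual alarm 7:00 (420 min), commute normally 30 min, congestion says 40:
print(adjusted_alarm(420, 30, 40))  # 410 -> 6:50, ten minutes earlier
```

Trivial logic – the hard part, as ever, is the data feed behind `predicted_commute_min`, not the arithmetic.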
But once we start to apply this understanding at a greater scale, we have the potential to discover a means by which we can monitor, understand and manage our environment more effectively. Saving energy, regulating and distributing agricultural output and helping to provide access to clean water starts to tick a lot of boxes which bluntly need massive action given the current challenges that we are all facing.
But let’s not be naive. It’s not going to be plain sailing all the way home in our automated cars to our self-filling fridges. If developments continue – and there’s no reason to think that the pace will do anything but accelerate – we could soon be facing challenges in relation to security and privacy that could leave us in a very dark place indeed as a society.
Governments, businesses, organisations and individuals need to get involved – and quickly. Not to slow it down, nor simply to prevent any mistakes being made – there will always be wrong turns in any attempt to innovate. But to ensure that we use the unprecedented opportunities we will soon be faced with for the most effective purposes for us all – in business, in our communities and on a global scale.
Think it’s all pie in the sky? Or are we walking blindly into disaster by arming inanimate objects with the capacity for (limited) intelligence? There’s a sci-fi plot or two in there, that’s for sure. I’d love to hear from you in the comments below if you have any thoughts about future developments.
The founder of GigaOm is known for his incisive writing in the tech field but this post caught my attention in particular, as it flags up just how far-reaching the potential impact of increased connectivity could be as it spreads across all levels of society.
Om’s thoughts are provoked by a recent story involving Uber, the US-based startup that lets customers book private cars with drivers via a mobile app. It’s a hot startup in the US and for good reason: it’s genuinely disruptive.
The company hit the news recently when a number of drivers were ‘let go’ as a result of customer feedback. To be clear, the drivers are self-employed and, following each trip, both passenger and driver have the opportunity to rate the other. Losing your livelihood because you failed to deliver your side of a deal is hardly new. But the concept of a company acting on the basis of unvalidated low ratings does introduce a new dynamic into the equation.
THE QUANTIFIED SOCIETY
We’re all increasingly happy (and in some cases incentivised) to rank, rate and share our experiences in the digital world. This means that we’re starting to build a ‘Quantified Society’, where each individual is being assessed and scored. Fine. But who judges what is a good (or bad) score and how does the law back that decision up?
For example, what is the magic customer score at which you can ‘sack’ a worker fairly? And, however it might be assessed in the future, what can an employer do if the score that represents an individual’s reputation takes a hit? Society’s rules are going to have to keep up with the allocation of points, even when the criteria behind those points are unclear.
You also have to wonder how individuals might modify their own behaviours as they become increasingly aware of what the potential impact might be of the scores that they allocate. Will feedback be restrained for fear of someone losing their job? Or, instead, will people hide behind their perceived online anonymity in order to criticise aggressively simply because they can?
AND THE FINAL SCORES ARE…
It might be the future and driven by technological advancement but, as Om writes, the range of challenges within a Quantified Society are likely to be “less technical and more legislative, political and philosophical”.
The human race evolved over time to reward the fittest with survival. Society then developed to ensure that other, more desirable attributes were sufficiently rewarded. We’re now moving into the next stage, however, which introduces another form of competition. The only thing that’s clear at the moment is that the rules haven’t been set – and are unlikely to be for a long time yet.