It sometimes seems as if we’re abdicating more and more control over our everyday lives to computers. Machines are analyzing the data and making many of the decisions that used to be the province of human beings. In some cases, this has had very positive results; in others, not so much.
A study conducted by Oxford University about a year ago said that almost half of the jobs currently occupied by people in the U.S. could be performed by computers. The combination of computers and robotics has already infiltrated many fields, with technology taking the place of pharmacists, legal researchers and document reviewers, reporters and more.
All of us (especially moms) can rejoice as unmanned computerized aircraft and tanks are slowly beginning to replace soldiers on the front lines of the battlefield. Personally, I’d prefer to risk losing a hundred MAARS systems rather than place young members of the military in the line of fire.
Some of us might be a little less comfortable, though, with the idea of artificial intelligence taking over the task of making medical diagnoses. Most of us have seen (or been the victim of) some sort of computer error enough times to be wary about entrusting our healthcare decisions to a machine – even if some do believe IBM’s Watson to be the best doctor in the world.
Of course, where computers have really gained ground is in jobs that are a bit less life-or-death than those of soldiers and physicians, such as the replacement of grocery store clerks with self-service machines. Even there, though, there has been plenty of resistance – to the point where some of the stores I frequent that had installed self-service aisles have taken them out. Last year, Costco completely eliminated its self-checkouts.
At those that still have them, I often see a long line backed up at the lone “real live human being” checkout while the self-service machines sit unused. And it’s no wonder. It seems as if every other time I use one, the machine gets confused and a real person has to come fix it before I can finish and pay. Of course, some folks love the machines. A study last month by NCR (which of course could be a tad biased, considering the company is the world’s largest maker of self-checkout technology) found that consumers “widely value” the DIY lanes.
Regardless of our preferences when it comes to buying milk and bread, many of us put our lives into the hands of computers on a routine basis. Everyone knows (though we may put it out of our minds when we’re in the air) that computers fly the planes now; except for takeoff and landing (and sometimes even then), the pilot is basically there in case something goes wrong. Nonetheless, it’s unlikely that commercial passenger planes will be flying with empty cockpits anytime in the near future. There are too many things that can go wrong – such as a failure of the computer system or the entire electrical system – and the consequences in those emergencies that require human intervention are too severe.
Many pilots will tell you that autopilot frees them from the routine part of flying so they can maintain better situational awareness, in the same way automatic transmissions in cars can allow drivers to pay more attention to the road and less to shifting gears. However, there have also been studies indicating that pilots’ over-reliance on autopilot systems can actually cause them to make more mistakes and put passengers in more danger. Pilots who become dependent on the autopilot system may find their hand-flying skills getting rusty.
Now, with the advent of driverless cars on the horizon, the concept is coming a little closer to home. Whether we like it or not, cars that do the driving for us are inevitable. We already have cars that can park themselves, or automatically brake when they detect they’re about to run into something. Driving itself is the next logical step. But what are the pros and cons, and how far will this abdication of critical decision-making go?
In July of this year, the U.K. announced that driverless cars will be allowed on public roads beginning in January of 2015. Some states in the U.S. (California, Nevada and Florida) have approved tests of driverless vehicles. Google has been at the forefront of automated automobile technology, and last spring the company said the best solution is to go further than aircraft autopilot does and take the driver out of the equation completely, building vehicles with no steering wheel, gas or brake pedals, or gear shift.
Cars that drive themselves present an attractive proposition. Just imagine how much more you could get done on the hour-long commute to work or the vacation road trip if you didn’t have to do the driving. You could read, chat freely with friends, hang out on social networks, eat, do your hair, sleep, even have a drink or two – all those things that some people dangerously attempt to do now while driving.
You’ll have all the advantages of public transportation within the privacy of your own car (or more accurately, it will be like having a private chauffeur). Even better than relieving you of the driving task, though, is the fact that it would (at least in theory) keep all those lousy, inattentive or drunk drivers out there from putting all the rest of us in danger every time we venture onto the streets.
Yeah, but … There’s always a “yeah, but,” isn’t there? If our cars start driving themselves, conveniently at the same time they’re becoming connected to the Internet of Things, there’s a potential there for some frightening scenarios. We’ve all seen the sci-fi movies: The government will not only know where you are all the time, but will also be able to take over control of your car (probably lock your doors, too, so you can’t get out) and take you wherever “they” want.
Even assuming those in authority are completely benevolent and don’t abuse the technology, there’s always the risk that a mischievous hacker will send you careening into a concrete abutment just for the fun of it. Or more serious cybercriminals could track the movements of the wealthy, make their cars stop in a deserted area and target them for robbery.
Leaving aside such dire scenarios, what about the legal implications? Who do you sue if another vehicle broadsides you and leaves you paralyzed? If there’s no driver responsible for the action, can you sue Google (or whoever the software maker is)? What will that do to the driverless car market? And how will this affect the insurance industry?
Despite the complications, I think most of us will eventually embrace the concept. As with all new technology, it’s the transition period that will be difficult. Many people aren’t going to trust driverless cars for a while – and they may indeed pose a threat before all the bugs are worked out. When automobiles were first allowed on the roads, they posed a threat to horseback riders and horse-drawn carriages, too. They were banned in some areas. But gradually, they were integrated into the existing transportation system and eventually took it over. That’s what I predict will happen with driverless cars.
I’m one of the biggest control freaks around, and I have plenty of qualms about letting the car do the driving for me. I’m concerned about the possibility of a remote hostile takeover. On the other hand, I see some of the crazy drivers on the streets with me and wish a computer were handling the driving for them. I also wouldn’t mind being able to write my articles or catch a few extra winks during the journey instead of having my eyes glued to the road every second.
I can foresee a day when parents will have to explain to their children the antiquated meaning behind the Beatles’ lyric “Baby, you can drive my car.”