Anyone who has looked at my LinkedIn profile, or my Twitter account, knows I live in New England. We’ve been whacked quite thoroughly with multiple snowstorms this winter. So it’s a given that the roads are covered with snow and ice.
One day my car “dinged” at me, flashing a signal to tell me that the roads were slippery. No, really?? Yet that got me thinking: do I really need a machine to tell me what my senses are already telling me? What happened to human learning and experience?
Negative Views of AI
As artificial intelligence (AI) moves forward, there is a growing chorus of concerns. Notables such as Elon Musk, Bill Gates, and Stephen Hawking have voiced worries. Googling the phrase “dangers of artificial intelligence” produces a plethora of articles. And while I think these articles might be a bit more doom-and-gloom than reality will probably show, there are reasons to worry.
I’d like to focus on one particular risk – one that others have mentioned, but that I believe is among the larger long-term threats to humanity in a world with a massive AI presence. I’ll leave the other scenarios, like AI-driven autonomous war machines that turn on us Terminator-style, to other writers; when AI robots can design and construct robot factories from scratch, without people, building new robots from raw ore and controlling the entire stream from raw materials to finished machines – then worry. (Let’s leave aside the danger of hacking; it’s my firm belief that if a computer system of any type is “connected,” then that connection, by definition, makes it vulnerable given enough effort.)
Even mere automated systems, nowhere near true AI, can be an enormous boon for safety – but they have a downside: they encourage complacency. I have known people with “automatic off” headlights who, when borrowing a car, left the lights on by habit and thus drained the battery. I can only imagine the complacency bred by collision-avoidance auto-braking systems; just knowing they’re there will lessen people’s own caution, even if only subconsciously. Similarly, I’ve ridden in several cars whose drivers’ openly voiced confidence in anti-lock braking technology led them to drive more aggressively in bad conditions – resulting in their passenger seats bearing my fingernail imprints as permanent markings while I held on for dear life.
Even in less critical applications people get complacent; autocorrect, for example, is both a bane and a boon – and the subject of many humorous screen captures of the program’s assumptions. But it breeds sloppiness: users assume the computer will automatically fix any typos, let alone do so correctly. And let’s not get into spellcheckers.
Skill with a Drill, and More
So I was at Lowe’s when I saw a power drill with a sensor that detects when the screw you are driving sits flush, and stops automatically. Being someone who often makes mental leaps from everyday things, my immediate thought was “so much for human skill.”
Interestingly, one of the essays I referred to here, specifically Businesses Moving Too Quickly to Robots? Will 1 in 3 Jobs Vanish by 2025?, voiced a very similar concern (bolding added, italics in original):
The problem, and we see it with pilots and doctors, is when the computer fails, when either the technology breaks down, or the computer comes up against some situation that it hasn’t been programmed to handle, then the human being has to jump back in and take control, and too often we have allowed the human expert skills to get rusty and their situational awareness to fade away and so they make mistakes.
In a discussion of pilotless airplanes, the interviewee expresses enormous trust in technology and faith in the machines, to wit (bolding added):
Planes are mostly already flown on autopilot already. But in terms of a transition, you can do it the way Google has done it—have it be self-flying but have someone there just in case of emergency. But how many years of data would be needed? When you look at airplane crashes, it’s not equipment failure. It’s human error.
Implicit in the above quote is blind faith in the programmers’ ability to anticipate everything; remember, programmers program based – in part – on interviews with users. Could a programmer anticipate birds being ingested into the engines, as happened in 2009 when Captain Sullenberger managed to land his plane in the Hudson River with no loss of life – and anticipate it to the degree of confidently programming a computer to handle every possible variation? Or the crash in Sioux City, where an engine’s fan disk fractured, cutting through the hydraulic lines and causing catastrophic system failures? On the Sioux City flight it was only the experience of the pilots, plus another pilot traveling as a passenger, that saved over half the passengers: on the fly, they tried something they had only read about in theory, using varying thrust from the engines to steer and control the plane in an improvised control system.
This is not to say, of course, that humans in such situations are perfect – of course they are not. But in a crisis where “gut feel” and “survival instinct” are as important as data and raw processing speed, I’ll take a trained person over a computer any time.
Broad Replacement of Human Knowledge and Judgment
In the hiring process, games and other electronic gizmos – not to mention personality tests and now even voice analysis – are used as one part of vetting candidates. In theory, this can provide valuable insight into the strengths and weaknesses of potential employees. In practice, the final human judgment about a candidate’s suitability is often deferred to the program, game, or test. Here is a key paragraph from How useful is pre-employment personality testing? (bolding added; link in original, and well worth reading too):
Another problem with using personality tests is disempowerment. When there’s a test to fall back on, managers inevitably step back from responsibility and surrender to the test, instead of asking the tougher questions. Like “the claw” in Toy Story, the test [rather than the person] “decides who will stay and who will go.”
An engineering intern once asked me if I ever used a lot of the problem-solving math that is fundamental to an engineering education. My answer was along the lines of “Not as often as you’d think, but understanding these basics is essential to understanding how the equations were derived and are used; after all, an equation is a model of a physical situation.” So, look at this. This is not learning; this is a ceding of ability from mind to machine. And the effects are starting to show: Millennials in America are atrocious at math – and not great at reading, either. (And don’t get me started on the Twitter generation, forgoing complex and nuanced thoughts for 140-character thought-bites.)
From auto-filled email addresses, to stored phone numbers, to recipe creation*, to remembered passwords, to the coming “Internet of Things” that can detect when you’re out of milk, eggs, or other items and very soon even order them for you for drone delivery (potentially without even checking with you first) – one after another, basic human tasks are being offloaded onto chips.
Still Just a Plains-Ape
For all our knowledge and technology, we are – biologically – still a plains-ape that happened to develop an extraordinarily large brain and walk upright, thus freeing our hands for other things. Yet our biology constrains us. Our muscles need exercise to avoid atrophy; in answer to the inactivity of modern life, we have created exercise clubs and all manner of home exercise equipment and videos. Our minds similarly need stimulation and activity to stave off senility and dementia. Having surrendered so much of the mental activity of routine tasks, we now have electronics to recreate that mental exercise – Lumosity comes to mind specifically. We pay money to slough off mental effort onto outside devices, and then pay more money for tools to keep us sharp. What have we really gained in the process?
The Real Threat of AI
In the movie The Matrix, an AI program named “Agent Smith” (played by Hugo Weaving) is interrogating Morpheus (played by Laurence Fishburne), one of the leaders of the human resistance, in an attempt to get the access codes for the last remaining human city in a world taken over by intelligent machines.
In his discourse, Agent Smith makes a very interesting observation about the supplanting of humans by machines as the top entity on the planet: “… as soon as we started thinking for you, it really became our civilization… .”
So the threat of AI, I believe, encompasses not only the dangers voiced by others, but also a slow slide into intellectual decline as we push more and more of our core responsibilities onto machines, driving toward a purely theoretical technological paradise. In this push, what we’re really doing is creating more and more free time to play – and thus escaping the need to think.
Which reminds me of another quote from science fiction, where Star Trek’s Captain Kirk observes: “Maybe we weren’t meant for paradise. Maybe we were meant to march to the sound of drums…”
Pardon me while I practice my drums.
© 2015, David Hunt, PE
* I am the family cook. My wife can cook, but I enjoy it. Some of my best creations have come from walking down aisles in the supermarket, seeing one thing, thinking “To use this I also need X, Y, and Z…”. That’s half the fun. Even my cookbooks contain annotations on changes that I’ve made. And occasionally I just peer in the pantry and fridge to cobble something together, usually with good results (albeit not always!). No computer can ever replace that adaptability.