The point is not that making a world to accommodate oneself is bad, but that when one has as much power over the rest of the world as the tech sector does, over folks who don’t naturally share its worldview, then there is a risk of a strange imbalance. The tech world is predominantly male—very much so. Testosterone combined with a drive to eliminate as much interaction with real humans as possible—do the math, and there’s the future.
We’ve gotten used to service personnel and staff who have no interest or participation in the businesses where they work. They have no incentive to make the products or the services better. This is a long legacy of the assembly line, standardizing, franchising and other practices that increase efficiency and lower costs. It’s a small step, then, from a worker who doesn’t care to a robot. To consumers, it doesn’t seem like a big loss.
Those who oversee the AI and robots will, not coincidentally, make a lot of money as this trend towards less human interaction continues and accelerates, since many of the products described above are hugely and addictively convenient. Google, Facebook and other companies are powerful and, yes, innovative, but the innovation curiously seems to follow an invisible trajectory, one that bends towards less and less human interaction. Our imaginations are constrained by who and what we are. We are biased in our drives, which in some ways is good, but maybe some diversity in what influences the world would be reasonable, and beneficial to all.
To repeat what I wrote above—humans are capricious, erratic, emotional, irrational and biased in what sometimes seem like counterproductive ways. I’d argue that though those might seem like liabilities, many of those attributes actually work in our favor. Many of our emotional responses have evolved over millennia, and they persist because a response prodded by emotion will, more often than not, offer the best way to deal with a situation.
Neuroscientist Antonio Damasio wrote about a patient he called Elliot, who had damage to his frontal lobe that made him unemotional. In all other respects he was fine—intelligent, healthy—but emotionally he was Spock. Elliot couldn’t make decisions. He’d waffle endlessly over details. Damasio concluded that though we think decision-making is rational and machinelike, it’s our emotions that enable us to actually decide.
With humans being somewhat unpredictable (well, until an algorithm completely removes that illusion), we get the benefit of surprises, happy accidents and unexpected connections and intuitions. Interaction, cooperation and collaboration with others multiply those opportunities.
We’re a social species—we benefit from passing discoveries on, and we benefit from our tendency to cooperate to achieve what we cannot do alone. In his book Sapiens, Yuval Harari claims this is what allowed us to be so successful. He also claims that this cooperation was often facilitated by our ability to believe in “fictions” such as nations, money, religions and legal institutions. Machines don’t believe in fictions, or not yet anyway. That’s not to say they won’t surpass us, but if machines are designed to be mainly self-interested, they may hit a roadblock. And if less human interaction causes us to forget how to cooperate, then we lose our advantage.
Our random accidents and odd behaviors are fun—they make life enjoyable. I’m wondering what we’re left with when there are fewer and fewer human interactions. Remove humans from the equation and we are less complete as people or as a society. “We” do not exist as isolated individuals—we, as individuals, are inhabitants of networks; we are relationships. That is how we prosper and thrive.
http://davidbyrne.com/journal/eliminating-the-human