Maybe it’s the Trump effect and the focus on job creation, or it’s the resurrection of jobs (once lost to technology) and the obvious cynicism behind predicting a mass return to an old state.
Perhaps it’s the financial analysts touting AI (artificial intelligence) as the new frontier in order to create the next generation of oversized valuations based on future promise and endless hype.
Or maybe its time has just come and, in the natural progression of technology, application and acceptance, we need to move on with reality.
Not meaning to be esoteric, I am talking about robots.
Not just the Star Wars R2-D2 and other celebrity fictional icons and not limited to advanced parlor shows like Watson (calm down KJers, just being provocative) but the hard-working, down-and-dirty, doing-real-jobs, people-serving, cost-cutting, changing-the-workplace robots that are evolving, have been creeping up on us for years and are increasingly more efficient, seamless and useful, albeit some are downright creepy…
Do the search thing and you will see what I have noticed: The preponderance of robot-related stories lately and the connection to all the key issues of our time, from President Trump to AI to security to health to the workforce and on and on.
All of which means that the discussion is not about whether embedded and evolving automation will replace people, but must be about the actual implications…not of what’s possible, as frankly anything is, but of what makes sense for society, for people, for life.
This is a People First issue and my fear is that the Digibabble debate could put us on a ruinous track.
Simon Kuper recently wrote in The Financial Times:
Most historians now believe that kings and presidents have relatively little impact on ordinary people’s lives. Demography, climate and technology matter much more. But the Great Man Theory of History lives on in journalism. Read the news these days and you’d think that Donald Trump and his entourage were the key forces shaping the future…
The debate about the future is currently monopolised by the unanswerable question of whether robots will take away human jobs. But so much else will happen…
Trump won’t create this future. He may imagine himself as the rider guiding the horse of history. In truth, he’s hanging on to the horse for dear life as it carries him in directions he never imagined.
In truth and fairness, we are all hanging on for the ride—and perhaps enjoying the sheer exhilaration of it all as things we only imagined, hoped for, read about or even dreamed of become reality.
Let’s be clear, despite our Digibabble arrogance, we are the inheritors of centuries of robot dreaming and planning…of thousands of years of surety that they were a fact.
From 600 BC onward, legends of talking bronze and clay statues coming to life have been a regular occurrence in the works of classical authors such as Homer, Plato, Pindar, Tacitus, and Pliny. In Book 18 of the Iliad, Hephaestus the god of all mechanical arts, was assisted by two moving female statues made from gold – “living young damsels, filled with minds and wisdoms”. Another legend has Hephaestus being commanded by Zeus to create the first woman, Pandora, out of clay. The myth of Pygmalion, king of Cyprus, tells of a lonely man who sculpted his ideal woman from ivory, Galatea, and promptly fell in love with her after the goddess Aphrodite brought her to life. (Wikipedia)
“Filled with minds and wisdoms”…not just rote mechanics…but AI even in Homer’s time.
And if, like me, you find 1001 Nights (Arabian Nights) fascinating, then you know that this amazing compilation of stories, collected and compiled over centuries, is a treasure trove of early science fiction…some of it uncanny…with an extra helping of robot stories as well.
Bottom line, robots have been with us as long as we have had imaginations. And while it makes great copy and fodder for analysts and consultants, the discussion of what jobs they do or don’t take seems to me to be a rather moot point as orders are up for commercial/industrial robots all over the world. From Fortune:
Last year was a good one for the robotics industry. North American businesses ordered 35,000 robots in 2016, a 10% increase from 2015, according to a report on Tuesday by trade organization Robotic Industries Association. Meanwhile, sales on those orders in the region reached an all-time high of $1.9 billion last year, beating the previous record set in 2015 of $1.8 billion…
The automobile industry, which took shipment of over 20,000 robots and their components in 2016, is partly driving the boom. The food and consumer goods industries, electronics, plastics, and life sciences were also big customers…
Although robot prices have declined over the years, the main reason companies want robots is to better compete with each other on speed and productivity, Burnstein said. Technological advances that allow robots to better track their location in warehouses and avoid injuring humans who work alongside them are also playing large roles in the boom.
Still, North American countries are behind others in adopting robots. The Robotic Industries Association does not track robotic orders or shipments outside of North America, but Burnstein said “China is the world’s fastest growing robot user” and that European companies are also on the rise.
America Firsters take note…
And the question of how fast is equally moot, as most either go out on a limb, as reported by The Telegraph:
Robots will have taken over most jobs within 30 years leaving humanity facing its ‘biggest challenge ever’ to find meaning in life when work is no longer necessary, according to experts.
Oxford University researchers have estimated that 47 percent of U.S. jobs could be automated within the next two decades. And if even half that number is closer to the mark, workers are in for a rude awakening.
Or hedge their bets, as reported by The New York Times:
Globally, the McKinsey researchers calculated that 49 percent of time spent on work activities could be automated with “currently demonstrated technology” either already in the marketplace or being developed in labs. That, the report says, translates into $15.8 trillion in wages and the equivalent of 1.1 billion workers worldwide. But only 5 percent of jobs can be entirely automated.
“This is going to take decades,” said James Manyika, a director of the institute and an author of the report. “How automation affects employment will not be decided simply by what is technically feasible, which is what technologists tend to focus on.”…
Throughout history, times of rapid technological progress have stoked fears of job losses. More than 80 years ago, the renowned English economist John Maynard Keynes warned of a “new disease” of “technological unemployment.”
As a true believer, I’m not really interested in any of the above. I have been reading about and dreaming about robots for as long as I can remember, and as any student of history can tell you, every technological advance has brought change to the workforce and will continue to do so.
The real question, the only question, is what happens to us…humankind. Here are two scenarios from Liu Cixin in The New York Times:
In the dystopian scenario, as jobless numbers rise across the globe, our societies sink into prolonged turmoil. The world could be engulfed by endless conflicts between those who control the A.I. and the rest of us. The technocratic 10 percent could end up living in a gated community with armed robot guards.
There is a second, utopian scenario, where we’ve anticipated these changes and come up with solutions beforehand. Those in political power have planned a smoother, gentler transition, perhaps using A.I. to help them anticipate and modulate the strife. At the end of it, almost all of us live on social welfare.
Neither scenario is ideal, and both lead to conflict of the worst order. In fact, if you read both through, each ends up with an old European-style aristocracy prancing around in whatever the equivalent of powdered wigs is in the future, with the masses huddled and ripe for revolt.
So, where does that leave us?
In his famous 1950 essay, “Computing Machinery and Intelligence,” Alan Turing proposed a test for machine intelligence: a computer that could, over the course of five minutes of text exchange, successfully deceive a real human interlocutor.
And perhaps that is where Google got the inspiration to study the following…see if you’re as chilled as I was…not just about AI and robots, but about the humans who program them—as in us. From IFLScience:
It’s looking increasingly likely that artificial intelligence (AI) will be the harbinger of the next technological revolution. When it develops to the point wherein it is able to learn, think, and even “feel” without the input of a human – a truly “smart” AI – then everything we know will change, almost overnight.
That’s why it’s so interesting to keep track of major milestones in the development of AIs that exist today, including that of Google’s DeepMind neural network. It’s already besting humanity in the gaming world, and a new in-house study reveals that Google is decidedly unsure whether or not the AI tends to prefer cooperative behaviors over aggressive, competitive ones.
A team of Google acolytes set up two relatively simple scenarios in which to test whether neural networks are more likely to work together or destroy each other when faced with a resource problem. The first situation, entitled “Gathering”, involved two versions of DeepMind – Red and Blue – being given the task of harvesting green “apples” from within a confined space.
This wasn’t just a rush to the finish line, though. Red and Blue were armed with lasers that they could use to shoot and temporarily disable their opponent at any time. This gave them two basic options: hoard all the apples themselves or allow each other to have a roughly equal amount.
Running the simulation thousands of times, Google found that DeepMind was very peaceful and cooperative when there were plenty of apples to go around. The fewer apples there were, however, the more likely Red or Blue were to attack and disable the other – a situation that pretty much resembles real life for most animals, including humans.
Perhaps more significantly, smaller and “less intelligent” neural networks were likely to be more cooperative throughout. More intricate, larger networks, though, tended to favor betrayal and selfishness throughout the experiments.
In the second scenario, called “Wolfpack”, Red and Blue were asked to hunt down a nondescript form of “prey”. They could try to catch it separately, but it was more beneficial for them if they tried to catch it together – it’s easier, after all, to corner something if there’s more than one of you.
So what do these two simple versions of the Prisoner’s Dilemma ultimately tell us? DeepMind knows that to hunt down a target, cooperation is better, but when resources are scarce, sometimes betrayal works well.
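The scarcity dynamic described above can be caricatured in a few lines of code. To be clear, this is only an illustrative toy model of mine: DeepMind’s agents were deep reinforcement learners, and the `best_move` helper, its payoff numbers, and the scarcity cutoff here are all invented assumptions. Still, it shows why “share” wins when apples are plentiful and “tag” wins when they are scarce:

```python
# Toy sketch of the "Gathering" trade-off (illustrative assumptions only --
# not DeepMind's actual reinforcement-learning setup).

def best_move(apples_available, tag_cost=1.0, solo_gain=3.0):
    """Compare an agent's expected haul from sharing vs. tagging its rival.

    Sharing splits the visible apples evenly. Tagging disables the rival,
    letting the agent harvest alone for a few steps (capped at solo_gain),
    but costs time spent firing instead of gathering (tag_cost).
    """
    share_payoff = apples_available / 2                        # peaceful split
    tag_payoff = min(apples_available, solo_gain) - tag_cost   # harvest alone, minus firing cost
    return "share" if share_payoff >= tag_payoff else "tag"

print(best_move(10))  # abundance: sharing yields more, cooperation wins
print(best_move(3))   # scarcity: disabling the rival yields more, aggression wins
```

The point of the toy is only that the switch from cooperation to aggression needs no malice, just arithmetic: once half the harvest is worth less than a solo run, tagging becomes the rational move.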
But please don’t be surprised.
Mary Shelley understood this—in fact, maybe even better, as she put the onus on the programmers—and addresses the issue in Frankenstein: “I am malicious because I am miserable. Am I not shunned and hated by all mankind?” The creature goes on to explain how his kind gestures were repaid with beatings and gunshot wounds by the people he tried to serve.
Sometimes betrayal works…
All of which brought the legendary Isaac Asimov to address this issue back in the 1940s in his Three Laws of Robotics that I have quoted many times before:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
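Note that the Laws are not three independent rules but an ordered veto hierarchy: each law yields to the ones above it. That ordering can be sketched in code; this is purely an illustrative encoding of my own (the flag names are invented, and no real robotics system works this way):

```python
# Illustrative sketch of Asimov's Three Laws as an ordered precedence check.
# The dict keys are hypothetical flags, not any real API.

def first_violated_law(action):
    """Return the number of the first (highest-priority) law an action
    violates, or None if the action is permitted. Earlier laws are checked
    first, so they automatically outrank the later ones."""
    if action.get("harms_human"):          # First Law: highest priority
        return 1
    if action.get("disobeys_human_order"): # Second Law: yields to the First
        return 2
    if action.get("endangers_self"):       # Third Law: yields to both
        return 3
    return None

# An action that both harms a human and endangers the robot is judged
# by the First Law alone, since it is checked first.
print(first_violated_law({"harms_human": True, "endangers_self": True}))
```

The design point the ordering captures is the “except where such orders would conflict” clause: a lower law never needs to mention the higher ones explicitly, because the check short-circuits before reaching it.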
If you have stuck with me all the way, I hope I have given you some food for thought.
The obvious is clear. The question is how we change the future now, understanding that it’s happening all around us.
For me, I just can’t and won’t accept this:
“Unless mankind redesigns itself by changing our DNA through altering our genetic makeup, computer-generated robots will take over our world.”- Stephen Hawking
What’s the point? I have no desire to be an Android that Dreams….
Because here is my deepest fear:
“The danger of the past was that men became slaves. The danger of the future is that men may become robots.” – Erich Fromm
And I’m not sure there is that big a difference…which is a story for another time.