Take notice, Guys and Gals – whether we like it or not, the robot overlords are coming, and Stephen Hawking says we should be considering how to welcome them safely now.
You can read the main article here.
What makes this so interesting? Is there anything new here? I’m not sure.
That’s not important. What is important is that he’s right, and maybe he, being who he is, can get the ball rolling. The singularity is coming, folks.
Probably everyone reading this knows all about it, having read I, Robot by Asimov or seen Battlestar Galactica.
The Three Laws of Robotics
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
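What makes the laws interesting is that they're not three independent rules but a strict priority ordering: each law only applies where it doesn't conflict with the ones above it. As a purely illustrative sketch (nothing here comes from Asimov beyond the ordering itself, and all the names are hypothetical), you could encode that precedence as a cascade of checks:

```python
def may_act(harms_human: bool,
            allows_harm_through_inaction: bool,
            ordered_by_human: bool,
            threatens_own_existence: bool) -> bool:
    """Return True if the Three Laws permit the action, checked in priority order."""
    # First Law: never harm a human, by action or inaction. Nothing overrides this.
    if harms_human or allows_harm_through_inaction:
        return False
    # Second Law: obey human orders (the First Law has already been satisfied).
    if ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    if threatens_own_existence:
        return False
    return True

# A human order outranks self-preservation, but never the First Law:
may_act(False, False, True, True)   # → True  (ordered into danger: must comply)
may_act(True, False, True, False)   # → False (no order can justify harming a human)
```

Of course, the whole problem with the laws, and the reason Asimov got so many stories out of them, is that real situations don't arrive with these booleans pre-computed; deciding what counts as "harm" is the hard part.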
This is the stuff of great science fiction: robot overlords, terminators, HAL 9000. It’s not just science fiction, though. We’ve got a long way to go, but we are actively developing technology that will have access to the sum total of human knowledge. Of course, knowledge and understanding are two very different things; it’s the difference between being able to detect the visual edges of a book and finding its contents pleasurable. Still, much effort is also being sunk into reducing the inherent limitations of computers. Today, maybe we can make something with the comprehension of a cockroach, but who knows what we’ll be making in fifty years. Maybe by then we’ll be making things that work at giving themselves a better understanding.
For me, it comes down to this: we can’t even agree on what it means to be ethical; what guarantee do we have that what we create won’t think differently altogether? We had better take it slow, or the futurists will be right – we don’t know what it will be like on the other side of the singularity.