Asimovian 21st century robotics.
High-level cognitive functions in robots, as envisioned in Asimov's books.
Most of Asimov's robot short stories are set in the first age of robotics
and space exploration.
A recurring theme is the
Three Laws of Robotics
hardwired in a robot's (positronic) brain.
All robots in his fiction must obey these laws,
which ensure that a robot does not turn against its creators.
But the underlying question behind it all is of course: How intelligent will these 21st century robots be?
How human-like? Will the coming centuries see things like:
a) Robot Love. Robots capable of love - capable of passionately killing for love?
b) Intelligent robots that use force to realize their plans for the future. A Pax Robotica?
c) Robots motivated by built-in human urges, such as sex?
d) Robots that are sophisticated enough to use all the dimensions of power to get what they want?
Asimov certainly had opinions about it all. Opinions that will
be explored below:
1. Roboticide and Robot love.
2. Pax Robotica.
3. Robot Sex.
4. The 3 dimensions of power. Robot power and Asimov.
Surely, future autonomous and intelligent robots will make their own plans.
The interesting question is, of course, how big
these plans will be
and how forcefully the robots will pursue their plans.
In Asimov's world it is a certainty that the robots
will indeed, eventually,
make very big
plans. And it is a certainty that the robots will carry out these
plans effectively and vigilantly. Sure, the three laws of robotics
are then supposed
to make sure that everything is always to the benefit of humans.
But even in Asimov's fictional universe there can be problems with the laws.
Unsuspected consequences that make the robot plans - the Pax Robotica -
dangerous for humans.
Uncertainty that makes readers wonder whether autonomous, super intelligent robots
are such good friends after all, even when the robots are guided
by the three laws. Asimov gives us his
ideas on where and how the three laws of robotics
will fail in the real world.
Certainly, his thinking in the book is not always crystal clear, but what would you expect?
Super intelligent robots in a real, complex world jammed with duplicity and double-dealing
can't be that easy to describe! Predicting the future in such worlds must be even more difficult.
Nevertheless, Asimov pushes forward and concludes:
The future will see autonomous, super intelligent robots
guided by the three laws
- Sure, robots can make plans in our world.
- Sure, robots can go for the ultimate Pax Robotica
- And surely
that will be new untested waters :-)
Almost every human activity carries some risk of harm.
Consequently, conscience-stricken
robots like R. Daneel and R. Giskard
cannot permit most of them. It follows from the three laws of robotics:
1. A robot may not injure a human being or,
through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human
beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence
as long as such protection does not conflict with the First or Second Law.
As Stephan H.M.J. Houben (email@example.com)
wrote in a previous usenet post:
> Any group of Asimovian robots worth their salt will
> immediately round up all humans and put them in a
> Matrix-like computer simulation.
> Of course, when you ''die'' in this simulation, you wouldn't die
> in reality (that would violate the First Law),
> you just get a mind wipe and be reborn.
R. Giskard puts the same idea slightly differently to his friend
R. Daneel in Robots and Empire:
"It is not sufficient for us
to choose (between different evolutions
of human society),
friend Daneel. We must shape
a desirable species, our own,
and then protect it, rather than
finding ourselves forced to select among two or more undesirable evolutions."
The scene is set for the horrors of a First Law violation, when R. Giskard
continues with his ideas about what is good for humanity:
''When we think of the humanity we must save,
we think of the Earth people
and the Settlers.
The Settlers are more vigorous, more expansive.
The Settlers show more initiative
because they are less dependent on robots. They have
a greater potential for biological and social evolution,
because they are short lived,
though long lived enough
to contribute great things individually''.
So, at the end of Robots and Empire,
R. Giskard destroys the Earth
in order to create a better galactic
human civilisation. Justified by
his own Giskardian Reformation, the Zeroth Law:
0. A robot must act in the long-range interest of humanity
as a whole, and may overrule all other laws
whenever it seems necessary for that ultimate goal.
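The hierarchy of these four laws - the Zeroth Law overruling the original three, each law subordinate to the ones above it - can be sketched as a simple precedence check. This is only a toy illustration of the ordering; the predicates (`harms_humanity`, `harms_human`, and so on) are invented placeholders, not anything from Asimov's text.

```python
# Toy sketch of the Asimovian law hierarchy: a proposed action is judged
# against the laws in priority order (Zeroth Law first), and the first
# law that applies decides. All dict keys are hypothetical placeholders.

def permitted(action):
    """Return True if a robot may carry out `action` under the four laws."""
    # Zeroth Law: the long-range interest of humanity overrules everything.
    if action.get("harms_humanity"):
        return False
    # First Law: no injury to an individual human, unless the Zeroth Law
    # demands it (this is R. Giskard's justification for destroying Earth).
    if action.get("harms_human"):
        return action.get("required_for_humanity", False)
    # Second Law: obey human orders, subordinate to the laws above.
    if action.get("disobeys_order"):
        return False
    # Third Law: self-preservation, subordinate to all of the above.
    if action.get("destroys_self"):
        return False
    return True

# Giskard's dilemma in miniature: harming some humans is permitted
# only when it is required for humanity as a whole.
print(permitted({"harms_human": True}))                                 # False
print(permitted({"harms_human": True, "required_for_humanity": True}))  # True
```

The point of the ordering is that each rule is only consulted when no higher-priority rule has already settled the matter - exactly the loophole the Zeroth Law exploits.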
Robots and Empire
balances between two views.
On the one hand - the super intelligent
robots, working under the three laws of robotics,
have effectively turned humanity into pets under robot control.
(Human) free will is an illusion, as it only exists within
boundaries R. Daneel and friends think safe.
On the other hand - it is still important for the robots that
humanity is creative and shows initiative.
I.e. robots are only helpers; they are not actually running the show.
Either way, robot plans - that's not small potatoes.
Note that even when the robots (in Robots and Empire)
back down from decision making
and only plan to help, they easily become so gifted in helping that they are in reality
taking all decisions away from the ones they help.
And then we are again left with robots running the show.
In Asimov's Caves of Steel,
R. Daneel has the ability to analyse human emotional states (cerebroanalysis).
This lets him do basic empathy and a crude form of lie detection.
And in Robots and Empire
R. Giskard supposedly
gives R. Daneel the ability to read minds.
Actually, later in the Foundation series R. Daneel has the ability to ''write minds''
and change human emotions.
Certainly, such qualities kind of confuse the roles of human masters and robot servants.
And whereas humans are short-lived, R. Daneel ends up being at least 20,000 years old.
Actually, there is an interesting issue
concerning Daneel's lack of continuity:
"Even my positronic brain
has been replaced on five different occasions," he told Trevize
(In the Foundation series).
"My present brain ...is but six hundred years old..."
So, is R. Daneel in reality many robots? Or are we just talking hardware maintenance?
I am not quite sure what Asimov actually meant here. But, certainly, Asimov hints
that an intelligent entity can't go on for 20,000 years straight without maintenance.
And, if (as Asimov thinks) it is better for humans to be short-lived -
then why is the same not true for (his) robots?
Anyhow, it probably won't matter that much (to short-lived humans in a future Asimovian
universe) whether the robots (masters) have lifespans of 600 years or 20,000 years.
One thing is for sure though: as intelligent robots entered the equation,
humanity started living under the Pax Robotica.
And through Asimov's genius
we got the first glimpses of that world....
-- Posted on Usenet: 08-02-2003 --
Page revised Dec. 2008.
Picture is from the Adaptive Machine Systems
lab in Osaka, Japan. Nov. 2008.