Tuesday, June 28, 2011

The Three Mild Suggestions of Robotics.

In a current project of mine, Sand in the Gears, robots have a prominent role. One thing I've noticed, however, is that I've completely written out Asimov's Three Laws, because frankly, humans will NEVER create robots that follow those laws. Ever.

Some may say, "But when we build super strong, fast, dangerous robots, we'll want those laws!" Well, you're already wrong.

First Law: A robot may not injure a human being, or through inaction, allow a human being to come to harm.
Some of our first robots will likely be designed to kill our enemies. Half of our robots are already designed to do so, with only minor human input, like in unmanned aircraft... as soon as the army can remove that input, it will.

Second Law: A robot must obey any order given to it by a human being, as long as it does not conflict with the first law.
"Go into that store and bring me out expensive stuff; make sure no one is hurt." I attribute this one to Asimov not spending a lot of time on the internet, where robots will be hacked and made to tie their masters down, and tea bag them for hours on end.

Third Law: A robot must protect its own existence, as long as doing so does not conflict with the first or second law.
Actually... this is probably the only one we'll stick to, because robots are expensive.

In the story, Kale, a robot designed by the main character, is built with a dual processing system (not like a computer's): whenever one thought is formed, an immediate opposing thought is formed as well, and Kale is allowed to believe both at the same time. (This is actually the moment when one of the other characters accepts that Kale is a "female" robot, because she has "fuzzy" logic.)
Here's a fun example:
Elry: So, which came first, then? The chicken or the egg?
Kale: The Chicken.
Elry:... but where did the chicken come from?
Kale: In all probability, another chicken.
Elry: So... an egg?
Kale: Yes, it would appear so.
Elry: So, the egg came first then?
Kale: Yes, it would appear so.
Elry: But you just said the chicken did. Did you change your mind already?
Kale: No, if either choice predates the other, then both are first, and both are second.
Elry: That doesn't make sense, Kale... the question is designed to be infinite.
Kale: The question is flawed; there can be no infinite in a finite universe.
Elry:... so if a tree falls in a forest...
Kale: One would not know, One has never been to a forest.
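For the programmers in the audience, Kale's dual processing could be sketched something like this. This is purely my own speculative toy illustration, not anything from the story: the `DualMind` class, its method names, and the use of fuzzy truth values in [0, 1] are all my invention. The idea is just that forming any thought immediately forms its opposite, and both can be believed at once.

```python
class DualMind:
    """Toy model of Kale's dual processing: every thought spawns its
    opposition, and both are held with fuzzy truth values at once."""

    def __init__(self):
        self.beliefs = {}  # proposition -> fuzzy truth value in [0, 1]

    def think(self, proposition, truth=1.0):
        # Forming a thought immediately forms the opposing thought too.
        self.beliefs[proposition] = truth
        self.beliefs["NOT " + proposition] = 1.0 - truth

    def believes(self, proposition):
        # Anything with a nonzero fuzzy truth value counts as believed.
        return self.beliefs.get(proposition, 0.0) > 0.0

kale = DualMind()
kale.think("the chicken came first", truth=0.5)

# Both the thought and its opposite are believed at the same time,
# which is roughly how Kale can answer "yes" to both chicken and egg.
print(kale.believes("the chicken came first"))      # True
print(kale.believes("NOT the chicken came first"))  # True
```

With fully certain thoughts (`truth=1.0`) the opposing thought gets zero truth and drops out, so Kale only gets "fuzzy" on genuinely undecidable questions.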
