Bender of Futurama, one of the most famous robots in popular culture. What if he was a KILLER?!

Discussing ethics is a little fruitless, at least if you like reaching conclusions. Ethics are, generally, the rules that govern a particular area or school of thought: medical ethics, political ethics, social ethics. In any given setting, there are ways in which you ‘should’ act or behave or even think.

Fortunately, with us pesky humans at the top of the food chain, it’s been fairly easy to decide what is and isn’t ethical: that which helps mankind is good; that which harms mankind is bad.

But… how would you go about creating a system of ethics for that which isn’t human?

If you can save a human or a cat from falling into a chasm, you save your fellow man.

What if the cat has to decide whether he saves you or another cat?

* * *

The ‘classic’ Robot Ethics example is this:

If a robot murders, who is accountable?

Robots cannot yet program themselves; so must the designer be sent to jail?

Robots cannot yet build themselves; so must the engineer be sent to jail?

Or… can we actually blame the robot? What good is justice, jail, or the death penalty if the robot does not feel? If a robot is a senseless, emotionless killing machine, will justice have been served by simply unplugging it?

* * *

Now the really sticky bit: what if we (somehow) create robots with a conscience, robots that never murder and never steal, robots that always act ‘ethically’? What if, following in the footsteps of their human creators as they surely would, these robots learn to program themselves? What if robots can build themselves?

This is all a very old train of thought, but it ties in with the question I asked yesterday: ‘What makes us human?’ At what stage do these robots become sentient, self-aware? Better yet: if you unplug a sentient robot, does it cease being self-aware? If there’s a soul, what happens to it?

In the original falling-into-a-chasm example, you don’t hesitate to choose the human as more important than the cat. What if a robot has to choose between saving one of us or another robot? What’s the ethical choice from the robot’s point of view?

<Mind explodes>

* * *

Back to humans and humanity. What happens when we finally play around with cybernetic brain implants? Does this become a religious or spiritual issue? If having a soul is what separates us from the rest of the food chain, surely we must somehow look after this tenuous physical/spiritual link. Would modifying our brains with artificial technology alter or sever that link? Would it make us soulless?

At what stage do we, by definition, become robots?

Looking into Pandora’s box, I can see another, nastier, gloopier issue: what if we’re already soulless? What if there’s really nothing to differentiate us from our finely-engineered robotic brethren? Would that just make us our android overlords’ herd of cattle?
