Discussing ethics is a little fruitless, at least if you like reaching conclusions. Generally they are rules that govern a particular area or school of thought: medical ethics, political ethics, social ethics — in any given setting, there are ways in which you ‘should’ act or behave or even think.
Fortunately, due to us pesky humans being at the top of the food chain, it’s been fairly easy to decide what is and isn’t ethical: that which helps mankind is good; that which harms mankind is bad.
But… how would you go about creating a system of ethics for that which isn’t human?
If you can save a human or a cat from falling into a chasm, you save your fellow man.
What if the cat has to decide whether to save you or another cat?
* * *
The ‘classic’ Robot Ethics example is this:
If a robot murders, who is accountable?
Robots cannot yet program themselves; so must the designer be sent to jail?
Robots cannot yet build themselves; so must the engineer be sent to jail?
Or… can we actually blame the robot? What good is justice, jail or the death penalty if the robot does not feel? If a robot is a senseless, emotionless killing machine, will justice have been served by just unplugging the robot?
* * *
Now the really sticky bit: what if we (somehow) create robots in good conscience, robots that never murder, never steal — robots that always act ‘ethically’. What if, as they would surely follow in the footsteps of their human creators, they learn to program themselves? What if robots can build themselves?
This is all a very old train of thought but it ties in with the question I asked yesterday: ‘What makes us human?‘ — at what stage do these robots become sentient, self-aware? Better yet: if you unplug a sentient robot, do they cease being self-aware? If there’s a soul, what happens?
In the original falling-into-a-chasm example, you don’t hesitate to choose the human as more important than a cat. What if a robot has to choose between saving one of us, or another robot? What’s the ethical choice from the robot’s point of view?
<Mind explodes>
* * *
Back to humans and humanity. What happens when we finally play around with cybernetic brain implants? Does this become a religious or spiritual issue? If having a soul is what separates us from the rest of the food chain, surely we must somehow look after this tenuous physical/spiritual link; would modifying our brain with artificial technology alter or sever that link; would it make us soulless?
At what stage do we, by definition, become robots?
Looking into Pandora’s box I can see another nastier, gloopier issue: what if we’re already soulless? What if there’s really nothing to differentiate us from our finely-engineered robotic brethren? Would that just make us our android overlord’s herd of cattle?
AGD
Sep 30, 2009
My main complaint is with your first two paragraphs, which seem to contradict each other. Ethics is hard in line one, easy for humans in para two. I also don’t think that the appeal to mankind as a whole is the basis of ethics; certainly it’s explicitly part of utilitarianism but it isn’t a necessary part of ethics as such.
If a robot doesn’t have the capacity to feel or reason, then punishment would indeed be pointless; the robot remains a mere tool of the creator (or customer) and is best treated as you would a badly-made car. If robots become genuinely autonomous, then they would/should count as equivalent to similar humans in all considerations. This includes the chasm-related scenario.
And that ‘soul’ stuff? Let’s not worry about something we haven’t ever observed.
sebastian
Sep 30, 2009
I split the first bit from the rest, as I wasn’t certain about its truth (with you in mind even…!) — but I’m also aware, when talking about this particular subject, accuracy or verisimilitude is probably never going to be too closely approached.
So ‘feeling’ is the linchpin, at least for robot vs. human ethics? But when it comes to survival, it should simply be ‘of the fittest’?
That’s why I mention the ‘soul’ bit. If we create robots that are identical to us in every way, but stronger, faster, longer-living — should we not try to differentiate between the two species?
I guess I’m asking: In the chasm scenario, when do robots become equivalent to humans? Why would robots ever be ‘worth’ more than a pet cat?
Ed Adams
Sep 30, 2009
These issues are similar to those dealt with in the movies A.I., I, Robot, and The Matrix (as well as others).
Several prominent computer programmers and robotics engineers of the last 20 years, while boasting of the technological advancements and the importance of such, have declared that artificial intelligence will never be on the level of “human” because of the issues of souls, morals, ethics, and feeling. When it all boils down to the core, machines are just a series of 1s and 0s.
As for humans, I believe that we are a bag of bones powered by a soul. Once the soul is gone, life is gone. Yes, the bag of bones can be re-animated, but if the soul is gone, it’s still just a bag of bones. The soul is what separates us from the animals and machines. It’s what gives us our deep inner conscience. The moral right and wrong. Ethics may be learned, per se, but even a small child knows when something’s “not fair”. Even when people who are “brain dead” have their bodies kept alive by machines, the soul has moved on and “life” has ceased.
But, that’s just my opinion. And like assholes, everybody’s got one.
sebastian
Sep 30, 2009
It’s a valid opinion, certainly.
But I still want to know, if there is a soul, at what stage of robotic enhancement do we become soulless?
Obviously more research needs to be done, re: brains and brain chemistry. And if we wait until we’re ‘masters of the brain’ (and thus ‘masters of the soul’?) what’s to say we can’t give robots a soul…
Ash
Sep 30, 2009
If you consider that technological implants would, in one way or another, sever our souls from whatever spiritual guidance human beings have, then we’re pretty much too late already.
Sure, cybernetic implants will mean that technology is INSIDE us, but when you look at our reliance on the Internet, on mobile technology (iPhone, BlackBerry, etc.) then all we’re doing is avoiding the technology being actually in us. We still have access to the same data that cybernetics would give (more or less); GPS, maps, email, phone, even now augmented reality is happening on a real level.
As for what makes us human… nothing, we’re a species, that’s all. Just as “cat” is a nickname for Felis silvestris catus, “human” is a nickname for Homo sapiens. When robots inevitably become self-aware they will become a species all their own. We have a very limited view of “organic” – we believe that metal isn’t organic and that reproduction isn’t the same as production. I think in the coming centuries we’ll be forced to reassess that view.
sebastian
Sep 30, 2009
Ah, your comment sounds similar to AGD’s on yesterday’s post. What constitutes ‘playing God’? Surely we’re already doing it — so who cares if we go one step further… and another step… and another step…
I think as long as it remains external, it’s not so dangerous. Technology, at the moment, changes our environment. We can adapt to our environment.
If we actually start altering our brains/bodies themselves — and I mean to the extent that we are becoming a new species — then things might be harder. We’re still ‘Homo sapiens’, as you say, but if that iPhone was somehow lodged in our brain… or we made it so that babies were born with iPhones installed in their brain… would we still be Homo sapiens?
We’re talking about ‘essence’ here, about what makes us human; what separates us from other species, and ultimately what sets us apart from other races — be they aliens… or robots…!
timoteo
Sep 30, 2009
I would love to tackle this question. I really would. But the brain is really dragging today, maybe I require some implants? haha. Anyway, I know you followed BSG and I thought that show brought up some really interesting points on the entire matter, especially from the cylon side i.e. Cavil’s rants on existence, his creators, and so forth.
Ash
Sep 30, 2009
Oh I am so glad someone else brought up BSG. I think that was the only show to effectively and sensibly tackle the “what makes us human” question.
I have to admit, I am fully in support of cybernetics and implants, etc. If it can make us stronger, faster, better, then let’s go. Who wouldn’t want a HUD overlay on their eyes? In a strange city, just blink and you pull up an augmented reality overlay of hotspots, landmarks, directions, etc.
I will fully embrace the future.
Mr. Apron
Sep 30, 2009
I take great exception to Ed Adams’s comment that ends with the claim that everybody’s got an asshole. Regrettably, Ed, a condition exists called “Imperforate Anus,” in which an infant is born without a normal rectal opening and surgery is, obviously, required to correct the situation.
I would have said “rectify” instead of “correct” but I can already hear the sniggering.
I am thankful, finally, for both my asshole and my opinions, which, when expressed, often make me sound like an asshole.
sebastian
Sep 30, 2009
Such valuable input, Apron… maybe you should be first on the list of robotic guinea pigs…
I think the term ‘anally retentive’ would, ironically, describe you quite well…
I’m also glad BSG poked it in a ‘modern’ theological/technological sense.
But I think we can’t really compare TV (or media) to real-world applications. In TV or film we live purely within the fantasy of the director and writer. Sadly, out here in the real world, we’re governed by laws of physics (?) that we don’t know much about.