Do they deserve basic human rights?

What if a robot was programmed to learn from what it saw?

Again, in this hypothetical example that Angry Lawyer has posed, there is only one robot, so it wouldn't exactly be a drain on the population. Actually, America could use a better president.

Anyway, what would be its robot rights then?
 
Yeah, umm, if it was programmed to learn from what it saw, it's still programmed, and humans don't need to be programmed.

Robot rights, for example, would be like human rights but without being able to own a house or work in a business and so on. Remember, robots are created to help people.
 
lol eejit, that's different, because humans aren't born lost in the wild for over a year without contact with other humans, unlike robots, which need to be programmed from the start.

But anyway, thanks for that, it was interesting.
 
lol eejit, that's different, because humans aren't born lost in the wild for over a year without contact with other humans, unlike robots, which need to be programmed from the start.

...wut.

I think you just self-contradicted in the first paragraph, unless I'm misunderstanding you entirely. (I might be! English isn't my first language!)
 
No, none of these human qualities would truly exist; everything they'd express would be nothing but a mimicry of human behavior, unless they actually did have feelings.
 
Dan is right. What are you proposing? A soul?

Dan is not right. I am not proposing a "soul", merely proposing that artificial neurons, if programmed correctly, would be totally indistinguishable from biological neurons, and that consciousness does not arise from the fact that protein-based neurons are firing, but from the pattern in which they are firing, in the same way that it doesn't matter whether musical information is burned onto a disc or etched into a record: it's still the same music. It doesn't matter whether consciousness or sentience occurs in a protein or a silicon medium, it is still sentience, and any distinction we make based on what it "really" is would be totally arbitrary.

An artificial replica of you would be completely indistinguishable from you. It might even think that it was you. An exact artificial replica of a human being would behave exactly as a human being does. There's no magical thing endowed to proteins and neurotransmitters that cannot be replicated, at least in principle, by artificial simulation and inorganic analogue.
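To make the "pattern of firing" point concrete, here's a toy sketch (my own illustration, nothing authoritative) of a leaky integrate-and-fire style artificial neuron in Python. All it shows is that a spike pattern is the output of a simple update rule, and the rule doesn't care whether proteins or a CPU evaluate it; the threshold, leak and input numbers are made up.

Code:
# Toy leaky integrate-and-fire neuron: the "firing pattern" is just the
# output of an update rule, and the rule is indifferent to the physical
# substrate that runs it. (Illustrative only; real neurons are far messier.)

def simulate_spikes(inputs, threshold=1.0, leak=0.9):
    """Return a list of 0/1 spikes for a stream of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # integrate the input, leak a little
        if potential >= threshold:               # fire once the threshold is crossed
            spikes.append(1)
            potential = 0.0                      # reset after a spike
        else:
            spikes.append(0)
    return spikes

if __name__ == "__main__":
    # The same input stream always yields the same spike pattern, whatever
    # hardware does the arithmetic.
    print(simulate_spikes([0.3, 0.4, 0.5, 0.1, 0.9, 0.2]))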

I read an article recently by Daniel Dennett which really speaks to this idea:
Daniel C. Dennett said:
1. Good and Bad Grounds for Skepticism

The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves. That is, we are extraordinarily complex self-controlling, self-sustaining physical mechanisms, designed over the eons by natural selection, and operating according to the same well-understood principles that govern all the other physical processes in living things: digestive and metabolic processes, self-repair and reproductive processes, for instance. It may be wildly over-ambitious to suppose that human artificers can repeat Nature's triumph, with variations in material, form, and design process, but this is not a deep objection. It is not as if a conscious machine contradicted any fundamental laws of nature, the way a perpetual motion machine does. Still, many skeptics believe--or in any event want to believe--that it will never be done. I wouldn't wager against them, but my reasons for skepticism are mundane, economic reasons, not theoretical reasons.

Conscious robots probably will always simply cost too much to make. Nobody will ever synthesize a gall bladder out of atoms of the requisite elements, but I think it is uncontroversial that a gall bladder is nevertheless "just" a stupendous assembly of such atoms. Might a conscious robot be "just" a stupendous assembly of more elementary artifacts--silicon chips, wires, tiny motors and cameras--or would any such assembly, of whatever size and sophistication, have to leave out some special ingredient that is requisite for consciousness?

Let us briefly survey a nested series of reasons someone might advance for the impossibility of a conscious robot:

(1) Robots are purely material things, and consciousness requires immaterial mind-stuff. (Old-fashioned dualism)

It continues to amaze me how attractive this position still is to many people. I would have thought a historical perspective alone would make this view seem ludicrous: over the centuries, every other phenomenon of initially "supernatural" mysteriousness has succumbed to an uncontroversial explanation within the commodious folds of physical science. Thales, the Pre-Socratic proto-scientist, thought the loadstone had a soul, but we now know better; magnetism is one of the best understood of physical phenomena, strange though its manifestations are. The "miracles" of life itself, and of reproduction, are now analyzed into the well-known intricacies of molecular biology. Why should consciousness be any exception? Why should the brain be the only complex physical object in the universe to have an interface with another realm of being? Besides, the notorious problems with the supposed transactions at that dualistic interface are as good as a reductio ad absurdum of the view. The phenomena of consciousness are an admittedly dazzling lot, but I suspect that dualism would never be seriously considered if there weren't such a strong undercurrent of desire to protect the mind from science, by supposing it composed of a stuff that is in principle uninvestigatable by the methods of the physical sciences.

But if you are willing to concede the hopelessness of dualism, and accept some version of materialism, you might still hold:

(2) Robots are inorganic (by definition), and consciousness can exist only in an organic brain.

Why might this be? Instead of just hooting this view off the stage as an embarrassing throwback to old-fashioned vitalism, we might pause to note that there is a respectable, if not very interesting, way of defending this claim. Vitalism is deservedly dead; as biochemistry has shown in matchless detail, the powers of organic compounds are themselves all mechanistically reducible and hence mechanistically reproducible at one scale or another in alternative physical media; but it is conceivable--if unlikely--that the sheer speed and compactness of biochemically engineered processes in the brain are in fact unreproducible in other physical media (Dennett, 1987). So there might be straightforward reasons of engineering that showed that any robot that could not make use of organic tissues of one sort or another within its fabric would be too ungainly to execute some task critical for consciousness. If making a conscious robot were conceived of as a sort of sporting event--like the America's Cup--rather than a scientific endeavor, this could raise a curious conflict over the official rules. Team A wants to use artificially constructed organic polymer "muscles" to move its robot's limbs, because otherwise the motor noise wreaks havoc with the robot's artificial ears. Should this be allowed? Is a robot with "muscles" instead of motors a robot within the meaning of the act? If muscles are allowed, what about lining the robot's artificial retinas with genuine organic rods and cones instead of relying on relatively clumsy color-tv technology?

I take it that no serious scientific or philosophical thesis links its fate to the fate of the proposition that a protein-free conscious robot can be made, for example. The standard understanding that a robot shall be made of metal, silicon chips, glass, plastic, rubber and such, is an expression of the willingness of theorists to bet on a simplification of the issues: their conviction is that the crucial functions of intelligence can be achieved by one high-level simulation or another, so that it would be no undue hardship to restrict themselves to these materials, the readily available cost-effective ingredients in any case. But if somebody were to invent some sort of cheap artificial neural network fabric that could usefully be spliced into various tight corners in a robot's control system, the embarrassing fact that this fabric was made of organic molecules would not and should not dissuade serious roboticists from using it--and simply taking on the burden of explaining to the uninitiated why this did not constitute "cheating" in any important sense.

http://ase.tufts.edu/cogstud/papers/concrobt.htm
 
Well, the OP posed a purely hypothetical situation where this thing does in fact have sentience and feelings.
 
Whoever has played Mass Effect will vote yes!

Just because the only life we know of is organic doesn't mean there isn't a cybernetic life form somewhere in the far reaches of space.
 
 
Well, Asimov had a lot to say about that. He concluded that no, no matter how "intelligent" it is, a machine should never have the rights of a human, and that humans should always come first. I suppose I agree with this.

Isaac Asimov said:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


However, things get quite complicated when it comes to 'fairness', and when the emotion is no longer emulated but 'felt'... here comes the Zeroth, and most ill-defined, law of robotics :x How on earth is a machine expected to work out what is for the good of humanity? What defines 'better'? Would the robot start killing trailer trash and drug dealers because it believed society would be better off without them? To what extremes can this ideology (read: programming) be taken? (See the sketch at the end of this post.)

Wikipedia said:
Zeroth Law added

Asimov once added a "Zeroth Law"—so named to continue the pattern of lower-numbered laws superseding in importance the higher-numbered laws—stating that a robot must not merely act in the interests of individual humans, but of all humanity. The robotic character R. Daneel Olivaw was the first to give the Law a name, in the novel Robots and Empire; however, Susan Calvin articulates the concept in the short story "The Evitable Conflict".

In the final scenes of the novel Robots and Empire, R. Giskard Reventlov is the first robot to act according to the Zeroth Law, although it proves destructive to his positronic brain, as he is not certain as to whether his choice will turn out to be for the ultimate good of humanity or not. Giskard is telepathic, like the robot Herbie in the short story "Liar!", and he comes to his understanding of the Zeroth Law through his understanding of a more subtle concept of "harm" than most robots can grasp. However, unlike Herbie, Giskard grasps the philosophical concept of the Zeroth Law, allowing him to harm individual human beings if he can do so in service to the abstract concept of humanity. The Zeroth Law is never programmed into Giskard's brain, but instead is a rule he attempts to rationalize through pure metacognition; though he fails, he gives his successor, R. Daneel Olivaw, his telepathic abilities. Over the course of many thousand years, Daneel adapts himself to be able to fully obey the Zeroth Law. As Daneel formulates it, in the novels Foundation and Earth and Prelude to Foundation, the Zeroth Law reads: "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

A condition stating that the Zeroth Law must not be broken was added to the original Laws.

A translator incorporated the concept of the Zeroth Law into one of Asimov's novels before Asimov himself made the Law explicit. Near the climax of The Caves of Steel, Elijah Baley makes a bitter comment to himself, thinking that the First Law forbids a robot from harming a human being, unless the robot is clever enough to rationalize that its actions are for the human's long-term good. In Jacques Brécard's 1956 French translation, entitled Les Cavernes d'acier, Baley's thoughts emerge in a slightly different way:

Un robot ne doit faire aucun tort à un homme, à moins qu'il trouve un moyen de prouver qu'en fin de compte le tort qu'il aura causé profite à l'humanité en général ![14]

Translated back into English, this reads, "A robot may not harm a human being, unless he finds a way to prove that in the final analysis, the harm done would benefit humanity in general."
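To make the precedence problem concrete, here's a minimal sketch (mine, purely illustrative) of the Laws as an ordered veto list in Python, Zeroth Law checked first. Every function name here is invented for the example; the point is that the whole scheme hinges on a harms_humanity() predicate that nobody knows how to actually define.

Code:
# Illustrative sketch of Asimov's Laws as an ordered veto list: lower-numbered
# laws are checked first and override the ones below. All names are invented
# for this example.

def harms_humanity(action):
    # The ill-defined part: Asimov's robots have to estimate this themselves,
    # and the stories turn on how badly that can go. Placeholder only.
    return False

def harms_a_human(action):
    return action.get("injures_human", False)

def disobeys_order(action):
    return action.get("violates_order", False)

def endangers_self(action):
    return action.get("destroys_robot", False)

def permitted(action):
    """Check an action against the Laws in priority order (Zeroth first)."""
    if harms_humanity(action):    # Zeroth Law
        return False
    if harms_a_human(action):     # First Law
        return False
    if disobeys_order(action):    # Second Law
        return False
    if endangers_self(action):    # Third Law
        return False
    return True

if __name__ == "__main__":
    ordered_to_attack = {"injures_human": True, "violates_order": False}
    print(permitted(ordered_to_attack))   # False: the First Law vetoes the order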
 
I think they should have their own rights developed. Since they would be a new life form created by us, we have the responsibility to sort out their legal status. We should not simply apply human rights to artificial beings; I do not believe they are compatible. At best, their rights should be similar to a child's. They do not need to go off on their own life journey, and we should not program them to have this need, regardless of intelligence or ability or "feelings". If an entity is programmed with "feelings", it needs to have rights protecting those feelings to the same extent we would have our feelings protected. Of course, they also need to be treated as property. For example, if someone were to assault an A.I., it would be wrong to charge the person with a crime equivalent to causing bodily harm, since an A.I.'s body is easily and cheaply repaired. Restitution would be due to the "owner" or caretaker of the A.I., and maybe an apology or some butlery to amend its "feelings" so that it may know justice.
It is my opinion that we should think long and hard about creating competition for ourselves. I don't fear robot rebellion, but there could be disastrous social consequences if real A.I. somehow ran around being better than people at things.
 
As someone said before, f**k 'em, they're robots. Why would humans create robots and then give them human rights? How does that help us?
 
Keep in mind that even with rights similar to ours, if they are programmed to really want to do what we want them to do, it won't matter, if you get what I'm trying to say.
In other words, if we designed them so that their major goal and ambition in life is to serve mankind, then even with rights they are going to do what we want anyway.
 
Movies aside (as everyone seems to have brought them in), personally, I'd say yes. If they can feel and think, have personalities and opinions, and create art, then yes, they deserve "Human Rights". Of course, they wouldn't be human rights, because they're not human. But they definitely deserve the right to live and the like.

As to the question "Why?", well, I'd ask you the question, "Why does a human deserve their rights?" If we assign an entity to every body, and that entity is an individual "soul", what we would define as "you" would be that entity. It is what makes you who you are. Sure, you could go really scientific and talk about how it's just the arrangement of electrons and shit, but when it comes down to rights and morality, we can't think of it like that, or we'd all end up with no rights, and no one would be responsible for what they did. But that robot's entity would also be a "you". It would actually be someone who thought about and decided how to do something, made a choice, or felt something about what they wanted to do.

And most importantly of all, that robot would ask itself "Do I want to die?" It would be a question with an inevitable response: "No" (assuming, hypothetically, the parameters you gave me). This will to live, and the sadness and depression that formed from considering its death, would be what defined it as a person.

The bottom line is, we're just a bunch of neurons and wires, biologically, sure, but we're still just a circuit when it comes down to it. And if it were machines that had evolved on this planet, and successfully created a biological, sentient lifeform, they'd ask the same question. Does it deserve Robot Rights?

Do any of us deserve Equal Rights?
 
For the people who say f*ck robots, here's a thought. We're not trying to create intelligent robots just to serve us. We're also a lonely species, and so far we haven't found any aliens to talk to. Plus, the pure pursuit of knowledge etc.
 
For the people who say f*ck robots, here's a thought. We're not trying to create intelligent robots just to serve us. We're also a lonely species, and so far we haven't found any aliens to talk to. Plus, the pure pursuit of knowledge etc.
Also, man loves playing god. What better way than to create a subservient creature in our own image?
 
Of course the robots will wipe us out, send Earth back into a pre-industrial age, forget we ever existed and start wondering about a Creator.

Of course then they'll start advancing technology again, develop biological technology, and then develop a human.

The human will not be given Robot Rights, it'll form an army and kill the Robots.

Cycle continues...
 
For the people who say f*ck robots, here's a thought. We're not trying to create intelligent robots just to serve us. We're also a lonely species, and so far we haven't found any aliens to talk to. Plus, the pure pursuit of knowledge etc.

Why do we need robots to talk to? Why can't we talk to each other?
 
Because there are people like me in the world.

One day we're going to get wiped out, and boring conversation will go with us.
 
Why do we need robots to talk to? Why can't we talk to each other?
We know each other too well, I suppose. I know I'd find conversing with aliens more exciting than talking to a stranger. Isn't that why talking to animals has always been a common theme in movies and books?
 
For the people who say f*ck robots, here's a thought. We're not trying to create intelligent robots just to serve us. We're also a lonely species, and so far we haven't found any aliens to talk to. Plus, the pure pursuit of knowledge etc.

Indeed, robots may very well be valued members of our society someday, and may even replace us.
 
Well, Asimov had a lot to say about that. He concluded that no, no matter how "intelligent" it is, a machine should never have the rights of a human, and that humans should always come first. I suppose I agree with this.
That's not true at all.

Read The Bicentennial Man. Asimov had a lot of themes with robots desiring to become human, and the tragedy of their intelligent existence without rights.
 
To say that humans should be valued over robots is preposterous and selfish. Robots will no doubt end up being more intelligent, reasonable, logical, and generally better individuals than we are ourselves. There's a lot less behind your brain than you may think.
 
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

This sounds very reasonable.

Sure, robots will probably become more intelligent than, and superior to, humans, but that doesn't change the fact that they serve mankind. If we gain the ability to create robots this complex, their only purpose will be to help and serve humans, carry out research, etc. We really don't need to create robots just because they are smart and it's nice to have them around. Thus, once we get to the stage where we can create these "smart" robots, they will only need those three laws above, because their main purpose will be to serve mankind. Although I agree that the higher intelligence and reasoning of robots would merit them acquiring more rights, I really don't think it will happen.
 
So you're saying a robot that has feelings, and asks questions like "Where did I come from?" or "Why am I here?", should be put after a human being?
 
People are assuming the robot is built with the smartitude coded in directly.

What if the consciousness appeared due to emergence? Like, by chance, due to the way the software architecture is put together, it gains the mind of a newborn child, but with a much increased learning rate? It hasn't been built to think the way we choose it to, but has its own learned mind. Should it still be a second-class citizen?
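Since "learning rate" came up, here's a toy sketch (purely illustrative, not a claim about how an emergent mind would actually work) of what that knob means in the simplest possible online learner: it controls how strongly each new observation pulls the current estimate.

Code:
# Toy online learner: the learning rate decides how hard each observation
# pulls the running estimate toward what was just seen.

def learn(observations, learning_rate):
    estimate = 0.0
    for obs in observations:
        estimate += learning_rate * (obs - estimate)   # step toward the observation
    return estimate

if __name__ == "__main__":
    data = [1.0, 1.0, 1.0, 1.0]
    print(learn(data, learning_rate=0.1))   # slow learner: ~0.34 after four samples
    print(learn(data, learning_rate=0.9))   # fast learner: ~0.9999 after four samples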

-Angry Lawyer
 
I assumed it was hypothetical. It wouldn't matter, either way. If we created something that has its "own mind", we are responsible for it.

We're constantly producing things which have their own mind, and they're called babies.
 
And let me tell you, babies do a lot less than robots, and THEY get rights.
 
If a species creates something superior to itself, it deserves destruction.

In all seriousness, if we created sentient AI, I'd probably end up attempting to destroy the program for the good of mankind.
 
If a species creates something superior to itself, it deserves destruction.

In all seriousness, if we created sentient AI, I'd probably end up attempting to destroy the program for the good of mankind.

That doesn't make any sense.

A species creating an intelligence superior to themselves is a great achievement...
 
It kind of befuddles me, though. Creating something better than yourself. It's like creating 500W sound with a 200W speaker or something...
 
What if the consciousness appeared due to emergence?
They say that any neural network can show intelligence (even sentience) if it's complex enough. Who knows, perhaps we've created A.I. already? Even if we had, we wouldn't know about it, because (a) the machines we have today are bound by rigid rules, so even if one were intelligent it couldn't do anything, and (b) how would it communicate with us?

Also, do you think it's possible that planet Earth itself has a rudimentary consciousness? It's a complex network, after all. Some of its systems possess memory.
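For what it's worth, here's roughly what "neural network" boils down to at the smallest scale (toy weights, purely illustrative): layers of weighted sums pushed through a squashing function. "Complex enough" just means vastly more of the same arithmetic, which is why the sentience question isn't settled by the code itself.

Code:
# Minimal two-layer feed-forward network, just to pin down what a "neural
# network" is in this discussion. The weights below are arbitrary toy values.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """Each output neuron is a squashed weighted sum of the inputs."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in weights]

def forward(inputs, hidden_weights, output_weights):
    return layer(layer(inputs, hidden_weights), output_weights)

if __name__ == "__main__":
    hidden = [[0.5, -0.2], [0.8, 0.1]]   # 2 inputs -> 2 hidden neurons
    output = [[1.0, -1.0]]               # 2 hidden -> 1 output neuron
    print(forward([1.0, 0.0], hidden, output))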
 
You can't say that. The machines we've created are defined by the rules that bind them.
 
Do they deserve human rights? No, they're not human... you're looking for AI rights, a different kettle of fish altogether.



I wonder how many people would have said yes had it been animal rights? Probably a lot, seeing as how so many of you are advocating for a hunk of metal having rights; naturally you'd all say yes to real flesh and blood, right?


in before "animals have the right to be tasty" comments
 
I think the underlying question is whether they deserve the right to live.
 
Do they deserve human rights? No, they're not human... you're looking for AI rights, a different kettle of fish altogether.



I wonder how many people would have said yes had it been animal rights? Probably a lot, seeing as how so many of you are advocating for a hunk of metal having rights; naturally you'd all say yes to real flesh and blood, right?


in before "animals have the right to be tasty" comments
Pff, you can't eat robots.
 
Do they deserve human rights? No, they're not human... you're looking for AI rights, a different kettle of fish altogether.



I wonder how many people would have said yes had it been animal rights? Probably a lot, seeing as how so many of you are advocating for a hunk of metal having rights; naturally you'd all say yes to real flesh and blood, right?


in before "animals have the right to be tasty" comments

It's a similar concept, to be honest - the major divider here, however, is the machine being sapient. If the machine had animal-level intelligence, I bet most people would vote to be able to put it down whenever they pleased.
Interesting thought, though - if cows were proven to be sapient (but silent), would people give them rights?

-Angry Lawyer
 