Ethics and Artificial Intelligence

Originally posted by Magicpants
Err... so if AI ....and graphics are really good. Are we gonna get some funky VR attachments so we can 'interact'?
'Red Dwarf style groinal attachment comes to mind'

That said, I'm at work, and they may notice if I start grinding away...

Hehe, Red Dwarf nostalgia! :thumbs:
 
Originally posted by junkie
Have you ever seen the movie "13th Floor"?

Actually no. I'll keep an eye out for it coming on TV...

I'm not gonna start a flame either. I mean, do we actually know what the point of life is anyway? The main objective could be to kill as many other things as possible!!

Score Sheet:
13 spiders = 13000
352 flies (19 swallowed) = 371000
82,380,122 micro-organisms = 82

Total Score : 384082
 
Nice thread.
I would like it if an enemy would beg for its life. On the floor, crying; then the choice would be difficult. Especially if it was human-like.
 
Originally posted by FriScho
so no reason to think about such stuff in the next uuuuh... 10 years

It may happen very quickly.

Look how quickly technology has advanced in the last 20 years compared to the few thousand years before that!
 
Originally posted by MrD
It may happen very quickly.

Look how quickly technology has advanced in the last 20 years compared to the few thousand years before that!

Maybe in 100 years, all this COULD be possible.
 
Or it may never happen; we still drive cars based on very old ideas, and the same goes for computers (with bits and bytes). We would need a completely new computer technology, based not on transistors and 0/1 but on neurons: biological, dynamic networks with dynamic chemical connections. Talk about science fiction...
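
For what it's worth, software can already caricature the neuron idea on ordinary transistor hardware. Here's a minimal Python sketch of the classic "artificial neuron" model (every name and number is mine, purely illustrative), which also shows just how far this arithmetic is from real, dynamic chemistry:

import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs, squashed into a sigmoid "firing rate".
    # A crude caricature of a biological neuron: no dynamic chemical
    # connections, just arithmetic running on ordinary 0/1 hardware.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Hypothetical example: three inputs with hand-picked weights.
print(neuron([0.5, 0.1, 0.9], [0.4, -0.6, 1.2], bias=-0.3))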

I think we will get the power of the Earth Simulator supercomputer in Japan in our future handhelds. But I don't think we will be able to create REAL artificial life and thinking in the next decade. Maybe in 100 years, but by then I'll already be dead and couldn't care less :)
 
I don't think anything close to truly self-aware computers will be created in our lifetimes, although I do think great strides will be made in the field.
 
Originally posted by Tropico
Nice thread.
I would like it if an enemy would beg for its life. On the floor, crying; then the choice would be difficult. Especially if it was human-like.

Hmm, looks like something in that "Judge Dredd vs Death" game... nah, I wouldn't like it in Half-Life :/
 
It will be an important discovery that will start it all off. A bit like how the transistor started the computer revolution. We need something to start the AI revolution...

Neuristor, anyone?
 
AI is so far from human intelligence, and HL2's AI is even further.
 
The only time you would reach an ethical dilemma with artificial intelligence is if it could contemplate its own existence and the consequences of ceasing to exist. I don't think technology will ever reach that level of self-awareness, as I think self-awareness of that magnitude, which humans have, cannot be artificially reproduced. While some people are of the opinion that humans are nothing more than organic machines that could be reproduced with sufficiently advanced technology, I am of the opinion that there is much more to human existence than the mere physical and chemical reactions in our bodies. It is the intangible qualities of the soul and spirit that will defy artificial reproduction, and I believe they are the key to true self-awareness.
 
Let me put this spin on your question: is it morally wrong to kill other players in online games? If the AI doesn't actually cease to exist, and you just stop its game construct, then it's indistinguishable from fragging your friends in a CS round.
 
I don't mind killing anything in games; hell, I would have no problem shooting the Pope in HL2.
 
Put a "Virtual" gun in anyone of our "Virtual" hands....and we ARE going to shoot it....

It's that simple.
 
I think that were we able to produce a self-aware program/machine/etc. that was superior to us, it would be the greatest mistake ever.

Naturally we would feel threatened by our creation and would react in a hostile manner to it. In turn it would act to preserve itself, and we'd all die.

*points to storylines such as

Exosquad
Terminator 3
The Matrix
eXistenZ (to a degree)

I very much doubt that if self-aware A.I. were in a game, it would like being fragged over and over again, unless we programmed that interaction to be a fun experience for the computer too.

Furthermore, what separates us from machines? We use a complex system of chemicals, chemical reactions, and impulses. Big frigging whoop-de-doo.

Now, if we made a machine that didn't use chemicals but wires, motors, and electricity, and that could think just as fast as or faster than us, what makes us seem so great in comparison? What makes us so special?

nothing.

Sure, many religions say we have a soul, but would a sentient robot care? We would be bags-o-chemicals to them.

Oh yeah, and I'm against causing actual pain to an AI; I'm not a sadist.
 
Originally posted by Mountain Man
It is the intangible qualities of the soul and spirit that will defy artificial reproduction, and I believe they are the key to true self-awareness.

IMO, logic would suggest otherwise. As humans, we don't actually develop full "self-awareness" until we reach mid-childhood.

Young children have no concept of what it means to "die"; you simply cannot explain it to them. It would seem to me that becoming self-aware (as we understand it) is just a side effect of reaching a certain level of intelligence, just as you need to reach certain stages of development before you have the intellectual capacity to understand speech, etc.
 
But self-awareness just happens as a matter of course; we don't learn it over time. Frankly, we can't even explain what self-awareness is or how it is achieved. We have been trying to answer the age-old question of "What is the essence of existence?" for thousands of years, yet we are no closer to answering it today than when Socrates and Descartes walked the earth.

So if we don't understand it, how could we create a mechanism that will suddenly manifest these qualities?
 
Hmm, over the past 8 years I've sent approximately...

*remembers all his games of StarCraft/WarCraft/Sudden Strike/Age of Empires/Empire Earth*

500,000-plus little soldier guys to hell.

Let's hope they don't hold a grudge.
 
Originally posted by Mountain Man
So if we don't understand it, how could we create a mechanism that will suddenly manifest these qualities?

Easy, we study something that already manifests these qualities, and figure out how it works! Or we could just try to copy it without bothering with the how-it-works part.
 
If the NPCs are true AI, living lives and going about their business, and I smoke one of them and they hold their chest and writhe in pain and die... and then I reload a saved game and go up to the same guy and say wassup and go on my way... does that mean reincarnation is real? OH MAN, what if GOD decides to reload a saved game tonight... WE would all have to wait for Half-Life 2 all over again... AAAAAAAAAAAAAAAH
We're all just Ghosts in a Machine... maybe God will upload me a Girlfriend upgrade today... Sweeeet
 
Oh, and if it's gunnin' for me, I'm damn sure gonna kill it... true AI or not.
 
Originally posted by MrD
Easy, we study something that already manifests these qualities, and figure out how it works!
This must be a usage of the term "easy" that I was previously unfamiliar with.

More to the point, humans uniquely possess the quality of self-awareness (to my knowledge, no other organism on earth possesses this quality, at least not to an extent that we have recognized it; and please understand that basic survival instincts are completely different from the ability to contemplate one's own potential to cease to exist). Humans have been studying each other for the whole of our existence--as you said, we need to "study something that already manifests these qualities"--yet we haven't even begun to answer basic questions like "What is the nature of existence?" despite some of the most brilliant minds having spent considerable time trying to solve this riddle.
Or we could just try to copy it without bothering with the how-it-works part.
So you're assuming a simple copy would automatically obtain the qualities of the original (which begs the question: how do you copy something without understanding its function?). That's like saying stringing atoms and molecules together in the correct order will spontaneously create life. Sorry, but the universe just doesn't work that way.
 
I thought about this problem myself when I saw the NPCs in HL². I'd try to make an avatar that is a realistic copy of myself and try to kill it while it begs me for its life. Or even better, I would ask my mother to kill the digital me in the game. I wonder what her reaction would be (and mine, of course)... But we'll find out when HL² is released... Things are getting ethical and philosophical here; the fun is saying goodbye to those who have a conscience.

I think the problem is not in the A.I. but in the player himself. The more realistic a game becomes, the harder it gets to say what is more real. I believe that when I play a game, my mind perceives another reality (even when I play Tetris) and acts by the laws of the game. At the same time I ignore the real world, or should I say suppress it: I'm not hungry or thirsty, I don't feel pain, and I don't blink unless I do it consciously. This gave me the idea that the body is under the control of the mind, which perceives reality in different ways (which may explain how you can drop dead after being behind the computer for about a week).

Thus, I believe that if a game were to become as real as the real world itself, it would matter how you act in the game. I think HL² is groundbreaking here. I believe the processes going on in the player's head are similar to those of real life, but HL² is opening up a new set of processes: the ethical ones.

It isn't ethical to kill life-like NPCs, not because the NPCs act like real people, but because the player acts like a real person.
 
Who cares if we understand the brain? We need only accurately scan it and recreate the connections to produce a working, thinking, brain-like computer. It's like making a photocopy: you know how the photocopier works, you know something about what's on the original paper, but you don't know the location of every splotch of toner that the photocopier places on the blank sheet, and the final copy is pretty damn good.
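
To make the photocopy point concrete, here's a toy sketch (Python, every name and size is mine, wildly oversimplified): treat the scanned brain as nothing but a table of connection weights and step the signals forward, never asking what any single connection is for.

import random

# Toy "scanned connectome": a weighted connection matrix copied blindly,
# with no understanding of what any individual connection does.
N = 5  # a real brain has tens of billions of neurons; this is a caricature
random.seed(0)
weights = [[random.uniform(-1.0, 1.0) for _ in range(N)] for _ in range(N)]
state = [random.random() for _ in range(N)]  # current "firing" levels

def step(state):
    # Advance the copied network one tick: each unit sums its weighted inputs.
    return [max(0.0, min(1.0, sum(state[i] * weights[i][j] for i in range(N))))
            for j in range(N)]

for _ in range(3):
    state = step(state)
print(state)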


^^^And in response to what AI says: that's the same effect as all the nifty physics in HL2, etc. You'll be so used to just blasting enemies, you'll never think to block a door with a table; it just never crosses your mind. You've suppressed your real-world logic in favor of the game's. As better and more 'realistic' games come out, we'll need to start un-suppressing, I suppose...
 
Originally posted by FictiousWill
Who cares if we understand the brain? We need only accurately scan it and recreate the connections to produce a working, thinking, brain-like computer.
I don't think you appreciate the enormous complexity of this undertaking. There is much, much more to the human mind than simply making sure the proper neurons fire in the correct order.
It's like making a photocopy
Surely I don't need to explain to you that copying the human mind is considerably more complex than photocopying words on a page. In fact, that analogy is so ridiculous as to be irrelevant.

Regarding the gameworld rules: in most games, the people you are expected to kill are in some way deserving. Sometimes you are given their backstory and are aware that they must pay for past misdeeds. Other times, they are simply "bad guys" and their extermination is a matter of survival. Rarely are gamers provoked into killing innocent beings for no reason at all. I don't see why these gameplay rules would have to change just because of advanced AI. A character may act incredibly life-like, but if the gamer is placed in a kill-or-be-killed situation, I know few of us would feel any remorse for having to "defend" ourselves.
 
Why has everyone decided we must be here for a reason? Everyone keeps saying that only humans can contemplate life, existence, and why we're here. I don't care why I'm here. No one is here for any one given reason. I believe I'm here through certain circumstances unique to myself. No one else has lived my life on this planet. I'm insignificant in that respect; we all are. Just because some people ponder the afterlife doesn't mean it exists. And when was the last time you chatted to a cat about reincarnation? They could be thinking the same things I am.

Blah blah, that was a serious tangent. The only time I would NOT kill an AI being is when it changes the outcome of my life/game. In HL, it was just kill, kill, kill... the ending was the same whether I used a knife or an RPG. Sure, if you killed a scientist, you couldn't open the next door, but that has no real consequences; just reload the game and do it over. I hope in HL2 it WILL have a direct effect on how Alyx talks to me, how the zombies attack me OR take my side. Then I will change my thinking; but morally, I have no problems killing robots.
 
man yall are a bunch of nerds. who cares about this crap. yall need to get laid
 
Originally posted by Mac
man yall are a bunch of nerds. who cares about this crap. yall need to get laid
Obviously you don't care about it. All the more reason not to post on a thread about it.

Well, I just read through this thread, and it's quite interesting, so here's my take on it all:

The crux of this whole issue is, believe it or not, spirituality. Hold that thought while I explain :)

Let's say, hypothetically, we create a program that can learn. It's programmed very well, and it starts to learn how to learn more efficiently, what's important to learn etc, and it is programmed to (and given the ability to) make changes to itself. By the logic of some of you on this thread, eventually, this program will suddenly "wake up" and become self aware as a result of reaching a certain level of intelligence. That's bull. The truth is, no matter how complex a computer program acts, or how well it mimics human behaviour, it's still just little electrical signals, carrying 1s and 0s, being processed at ridiculously high speeds, to create the illusion of intelligence. While a program could become advanced enough to create a perfect illusion of intelligence, it's still a program. It may, as a result of being created with the intention of mimicking humans, beg for its life, or appear to be afraid of death, but in reality, there is no consciousness there to perceive that fear.
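
To show what I mean, here's roughly what such a "learning program" boils down to, stripped to a deliberately tiny Python sketch (the data and names are mine):

# Toy "program that changes itself": one parameter it keeps rewriting,
# nudged by trial and error until it fits y = 2x. It adapts, but it is
# still just a number being updated; no one is home.
examples = [(1, 2), (2, 4), (3, 6)]  # hypothetical training data for y = 2x
w = 0.0    # the single parameter the program keeps rewriting
lr = 0.05  # learning rate

for _ in range(200):
    for x, y in examples:
        error = (w * x) - y
        w -= lr * error * x  # gradient step: the program modifies itself

print(w)  # converges toward 2.0; "learning", with no awareness anywhere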

Think about your own mind. Sure, you have images coming in from your eyes, information coming in from all your senses; then your brain processes them, decides how to react, and does so. Sounds pretty computer-like and automated, right? So why do you perceive it? The notion of "I think therefore I am" comes to mind. You exist. You are aware of your existence. You are not your body; you are not even your mind. You are a singular awareness... a consciousness. At the moment, science can't explain what this "consciousness" is, but that's only because it's not advanced enough. For now, the only thing we can describe it as is "THE SOUL".

Now you're probably thinking "oh look, just because science can't explain something, you turn to religion". That's rubbish too. I'm a believer that both science and religion should be the same thing: they are both the search for the truth. Eventually, science will be able to explain what religion fills in for at the moment. To me, the term "soul" is just a word that best describes what science is yet to be able to. It's my belief that science and religion can co-exist, and I even have some theories that chop down any science-vs-religion arguments out there.

Anyway, back on topic. So you acknowledge that you exist, right? Therefore you do. A computer program will never be able to do this. A computer program might be able to produce the words "I think therefore I am", but it will never be able to understand it. Sure, it will be able to break it down grammatically and process it as data, adding to a huge database of constantly changing information that may appear to act like a human brain, but there will never be an awareness in the computer. Computers can obtain data and information, but never knowledge. To know something, you must be aware of it, and awareness is what computers simply can never have (not using digital technology anything like today's, anyway).

So does this mean that no matter how advanced or human AI appears to act, it's still morally acceptable to kill it?

Not necessarily. It's all about how you interpret things; it's completely up to your conscience. If you have ever (even if it was the first violent game you ever played) felt guilt for killing an NPC in a game, then you were subconsciously realizing that what you were doing was immoral. Perhaps not seriously, but a little. Since then, you will have become desensitised to it, and think nothing of it. Is it still immoral? That's a very difficult question. Once you become accustomed to something, and it no longer seems immoral to you, your intentions are not bad, so I'd say it's not immoral. But then again, this is what happens when you commit immoral deeds in the real world: you become desensitised. That's why there are killers out there who don't feel a bit of guilt when they kill, even if they did the first few times. That's how people can live dishonest lives. They become used to it. Obviously games are on a lesser scale than the real world, but I guess the morality issue comes down to this question:

Is it immoral to do something you once thought was immoral, but are now used to? (Like downloading MP3s, or pirated software, for example... I'm sure there'll be more than a few of you out there who do that.)

If you don't believe so, then you can play your violent games with a clean conscience. You see, the thing about morality is that only you can know if you're doing the right thing or not. It's up to you to search yourself and find out. Even if you don't like the answer.

Edit: Hooooley cow, sorry about the long post. I could talk about this stuff for hours, this is only the tip of the iceberg!
 
Originally posted by Mountain Man
More to the point, humans uniquely possess the quality of self-awareness (to my knowledge, no other organism on earth possesses this quality, at least not to an extent that we have recognized it).

I never said this wasn't the case.

I only argue that self-awareness is a side effect of intellectual development. For example, think about what humans and dogs have in common:

1) when born they are basically "blank" (intellectually speaking)
2) at ABOUT THE SAME AGE they can both understand basic human words or actions ("NO!", "BAD!" etc.)

* --- this is about as far as a dog can go

3) The child then develops enough to understand basic speech, and to talk back. At this age you cannot explain what death is, except in terms such as "gone away to a better place". They won't understand the raw concept.

4) The child eventually develops enough to become "self-aware" as we understand it, and the concept of death makes sense to them.

Originally posted by Mountain Man
So you're assuming a simple copy would automatically obtain the qualities of the original (which begs the question: how do you copy something without understanding its function?).

Yes, I am!!!!

For a start, think about what the word "copy" actually means. If the copy doesn't behave in the same way as the original, then you haven't "copied" it, have you?!

Also, scientists have cloned a pig. They have very limited knowledge of DNA, yet by copying it they can produce a new pig with the same properties as the original.

Originally posted by Mountain Man
Sorry, but the universe just doesn't work that way.

Then why does it work that way?!
 
I haven't read all this, just the first 4-odd pages, so I apologise if I'm repeating anyone. But if they did make fully self-aware AI and all that, what about when you exit the game and turn your computer off? Won't that be just as bad as shooting the in-game character?
 
Originally posted by Logic
It's programmed very well, and it starts to learn how to learn more efficiently, what's important to learn etc, and

Let's suppose it is a simulated human brain, rather than a program.

Originally posted by Logic
By the logic of some of you on this thread, eventually, this program will suddenly "wake up" and become self aware as a result of reaching a certain level of intelligence.

That's my logic, yes, but I wouldn't say "sudden"... more of a gradual thing, like learning how to speak. The reason I suggest this is that, IMO, small children are not fully "self-aware".

Originally posted by Logic
The truth is, no matter how complex a computer program acts, or how well it mimics human behaviour, it's still just little electrical signals

Humans are just big bags of chemical and electrical signals. If it is possible for us to have a "spirit", then why assume a purely electrical intelligence cannot have a "spirit"?

Originally posted by Logic
there is no consciousness there to perceive that fear.

This is possible, but how do you know that?

There is no test that could determine this. Why? Well, if there were, then you would have found a difference in behaviour (between the artificial intelligence and real human intelligence), yet by virtue of being an accurate replication of human intelligence it is impossible for there to be any difference.

If you cannot measure it, does it really exist?

Once again, logic suggests that "consciousness" is simply a side effect of intelligent thought. It doesn't really exist; it just seems to exist.
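
Put it this way, as a toy Python sketch (all names are mine): any test we could ever write is a function of behaviour, so a perfect behavioural replica must get exactly the same verdict as the human.

def judge(transcript):
    # Stand-in for ANY possible test of "consciousness": whatever it does,
    # it can only ever look at behaviour (here, the transcript).
    return "I think therefore I am" in transcript

human_reply = "I think therefore I am"
replica_reply = "I think therefore I am"  # behaviourally identical, by assumption

# Identical behaviour in, identical verdict out, for every possible judge.
assert judge(human_reply) == judge(replica_reply)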

Originally posted by Logic
A computer program will never be able to do this. A computer program might be able to produce the words "I think therefore I am", but it will never be able to understand it.

By our definition of what real "artificial intelligence" is, the computer is not programmed to produce any words. It is an accurate model of human (or other high-level) intelligence, i.e. it learns its own behaviour from those around it.

Thus, if the machine produces the words "I think therefore I am" without ever being told those words (programmed or otherwise), then there is only one possible conclusion: it is self-aware.

If that ever happens, will you still be so sceptical?
 
As long as the AI doesn't know pain, and just knows its role is to attack and fall over in the end... then I have no problem with it.
 
Originally posted by MrD
Humans are just big bags of chemical and electrical signals. If it is possible for us to have a "spirit", then why assume a purely electrical intelligence cannot have a "spirit"?
How can you possibly assume that it can? Neither argument can be proven by today's science, but if there is a "soul", at what point do humans receive theirs? Conception? Whatever the answer, how and when would a machine receive its "soul"? Since the machine is entirely created by us, we would have to actually create this "soul", and as you know, modern science can't even define what it is, let alone reproduce it. Of course, it is possible that what we know as the "soul" is just a part of our brain's normal functions, but like I said, neither argument can be proven with today's science.

Originally posted by MrD
There is no test that could determine this. Why? Well, if there were, then you would have found a difference in behaviour (between the artificial intelligence and real human intelligence), yet by virtue of being an accurate replication of human intelligence it is impossible for there to be any difference.

If you cannot measure it, does it really exist?

Once again, logic suggests that "consciousness" is simply a side effect of intelligent thought. It doesn't really exist; it just seems to exist.
The "if you can't measure it, does it really exist?" argument is a feeble one. If you close your eyes, does the world disappear? If a tree falls in a forest with nobody around to hear it, does it make a sound? Of course. The universe continues to exist and change whether we perceive it or not.

As for consciousness and intelligent thought... I'd say they go hand in hand: with one, you have the other. What I'm maintaining is that an electronic brain, or computer program, cannot "think" intelligently, because it is incapable of perceiving its own thoughts. Consciousness is not an illusion... if you believe so, you are denying your own consciousness, and are therefore forfeiting responsibility for any and all of your actions. You are accepting existence as a machine. A drone. Surely you realize that your awareness and conscious perception of your actions and thoughts allows you to be so much more.

Originally posted by MrD
Thus, if the machine produces the words "I think therefore I am" without ever being told those words (programmed or otherwise), then there is only one possible conclusion: it is self-aware.

If that ever happens, will you still be so sceptical?
I don't believe a machine or program coming up with words by itself proves its self-awareness at all. Hmm, I can see I'm about to start a huge rant again... better not. :)

Very interesting discussion, though. I'd be very interested to discuss it more deeply and really get to the bottom of it.
 
Originally posted by nbk-brando
This is in honor of the thread that was unintentionally hijacked (by me) here:

http://www.halflife2.net/forums/showthread.php?s=&threadid=10154&perpage=15&pagenumber=1

The ending discussion was, basically, what it means to create AI, how it may be possible, and what are the ethics issues that we'll be responsible for? If we eventually create something that can reason and have a sense of being, and these AI's are present in games, is it ethical to kill/hurt/maim them? Or have we crossed the line at this point?

CHRIST, IT'S ONES AND ZEROS... NOT A PERSON... THINK ABOUT IT... EVEN INTELLIGENT "AI" IS JUST SOFTWARE. *sheesh* fuggin' Greenpeace hippies :p
 
Originally posted by MrD
I never said this wasn't the case.

I only argue that self-awareness is a side effect of intellectual development. For example, think about what humans and dogs have in common:

1) when born they are basically "blank" (intellectually speaking)
2) at ABOUT THE SAME AGE they can both understand basic human words or actions ("NO!", "BAD!" etc.)

* --- this is about as far as a dog can go

3) The child then develops enough to understand basic speech, and to talk back. At this age you cannot explain what death is, except in terms such as "gone away to a better place". They won't understand the raw concept.

4) The child eventually develops enough to become "self-aware" as we understand it, and the concept of death makes sense to them.

How can you argue that an infant is not self-aware, simply because it is too young mentally to express its awareness? You seem to be fond of the idea that if you can't prove it's there, it's not. I believe that infants are self-aware. Obviously their intellect is far less developed, and they don't understand what they are perceiving yet, but they still perceive it. I actually also believe that saying humans are the only self-aware species is ridiculous. We have a more developed and complex intelligence, but how can it be argued that animals are not self-aware? It certainly can't be proven false, even if it can't be proven true either.

What this debate needs is... structure.... at the moment it's just an argument... we need to have a formal debate, each team (or person) picking a side, and arguing that side with everything they can come up with.... hmm... let's organise a strictly regulated (as in no discussion, only debate, following the debate rules) debate thread in Off Topic or something... that would be a lot of fun....
 
Originally posted by Logic
Very interesting discussion, though. I'd be very interested to discuss it more deeply and really get to the bottom of it.

Indeed :)

I have read your post, and I could counter-argue some of the points again, but I feel we will go round in circles!

So, a new tack...

In a nutshell, I believe that our "self-awareness" is such a fundamental part of us that it would be impossible to reproduce accurate human behaviour without it (I'm certain I would notice if I were talking about death to someone who is not self-aware). Thus, if we ever do produce fully accurate human AI, we will have produced self-awareness in the process (regardless of how it is done).

If that is true, then games can never become truly realistic, because we would have to create "life" (as we understand it) in order to fulfil that goal. And, surely we must agree, it would be immoral to go about killing real people a la GTA.

And if we also agree that self-awareness is a gradual thing (i.e. cats and dogs are self-aware to a small degree), and not just "switched on" suddenly, then there will come a point when someone must decide to go no further.
 
Originally posted by Logic
How can you argue that an infant is not self-aware, simply because it is too young mentally to express its awareness?

How can you argue that a machine is not self-aware, simply because it is made out of 1s and 0s? I argue it because I believe it, as do you!

I use "self aware" very liberally. Its not that I think an infant has no self-awareness, I just don't believe that it is developed anywhere near that of an older child. Small children, for example, have no problem running around naked on the beach. This I believe is some element of self-awareness since without it we would not have dignity, or worry about our image.

Most people argue that animals are not "self-aware" but that all depends on your definition. There is certainly a level of difference between animals and humans, just as there is between plants and animals. Clearly there is some scale of "self-awareness".

Originally posted by Logic
What this debate needs is... structure.... at the moment it's just an argument... we need to have a formal debate, each team (or person) picking a side.

Not my cup of tea, sorry.
 