Ethics and Artificial Intelligence

Originally posted by MrD
I only argue that self-awareness is a side-effect of intellectual development. For example, think about what humans and dogs have in common:

1) when born they are basically "blank" (intellectually speaking)
2) at ABOUT THE SAME AGE they can both understand basic human words or actions ("NO!", "BAD!" etc.)

* --- this is about as far as a dog can go

3) Child then develops enough to understand basic speech, and to talk back. At this age you cannot explain what death is, except in terms such as "gone away to a better place". They won't understand the raw concept.

4) Child eventually develops enough to become "self-aware" as we understand it, and the concept of death makes sense to them.
Yes, but what, exactly, makes the being in question self-aware? That is the question that is considerably difficult to answer. And as was pointed out earlier, is it fair to say an infant lacks self-awareness simply by virtue of its inability to express it?
I use "self aware" very liberally.
As I suspected. It has come down to a matter of semantics.
Also, scientists have copied a pig. They have very limited knowledge of DNA, yet by copying it they can produce a new pig with the same properties as the original.
I see you are exploiting the fact that "copy" is loosely defined in the terms of this debate. In this instance, the scientist is not copying the pig the way a photocopier copies a piece of paper. It's more like printing out two copies of something on a printer, such that both are more or less "originals" (ignoring the fact that all genetic replicas up to this point have been seriously flawed and only marginally viable life-forms). What I'm saying is that a scientist could not piece together molecules and replicate a pig. In the same way, a scientist cannot piece together simulated neural activity in a computer and create consciousness.
Then why does [the universe] work that way?!
The point is, it doesn't.
 
Originally posted by Mountain Man
Yes, but what, exactly, makes the being in question self-aware? That is the question that is considerably difficult to answer.

I have realised now that this is beside my point.

Whatever makes a being self-aware is irrelevant. The point is that we humans are self-aware and it is a fundamental part of our behaviour. Without it, we would not act the way we do. Thus if we produce an artificial intelligence that replicates our behaviour then it must be self-aware, because it is not possible to behave in this way without it.
 
Posted by Ministry of Intelligence at January 29, 2003 04:39 PM

Emergent Consciousness

Like most people, we've given a great deal of thought to the question of computer consciousness. Is it possible that future computers might be thinking, self-aware individuals?

The best analog to a computer mind is a human one. Like computers, human brains are mechanical devices that perform digital operations. Computers contain a relatively small number of electronic circuits, each capable of a binary "on" or "off" state. Brains contain a tremendous number of neurons, each capable of thousands of different states (the signal each cell transmits depends on the number and rate of its electrical impulses). So the brain is much, much more complicated than a machine, but it is fundamentally doing the same thing a computer does.
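
The difference between the two kinds of switching element can be sketched in a few lines of Python. This is our caricature, not a biological model, and the rate ceiling is an illustrative assumption:

```python
# A caricature, not a biological model: a logic element holds one of
# two states, while a rate-coded "neuron" signals with its firing
# rate, so it can take on thousands of distinguishable values.

class BinaryGate:
    def __init__(self):
        self.state = 0              # only 0 ("off") or 1 ("on")

    def set(self, on: bool):
        self.state = 1 if on else 0

class RateCodedNeuron:
    MAX_RATE_HZ = 1000              # assumed ceiling on impulses/second

    def __init__(self):
        self.rate_hz = 0.0          # the signal is a rate, not a bit

    def fire(self, rate_hz: float):
        # Any rate between 0 and the ceiling is a distinct signal.
        self.rate_hz = max(0.0, min(rate_hz, self.MAX_RATE_HZ))

gate = BinaryGate()
gate.set(True)                      # two possible states
neuron = RateCodedNeuron()
neuron.fire(312.5)                  # effectively thousands of states
```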

So to examine whether a computer can be self-aware, we might ask how it is we are self-aware.

It seems to us that the basic discovery of the self is relatively easy to comprehend. Consider an infant. Psychologists believe that an infant is born without self-awareness, but acquires self-awareness at a certain stage of development. How does the infant see the world before and after this transformation?

Before, he sees the world the way a camera might. He senses the world, perceives pain and pleasure, and reacts to stimuli, but fails to appreciate that he is an agent in the world. He does not yet understand that he can interact with, as well as observe, his environment.

At some point, he will discover that he can affect the world. The glass of milk in front of him is upright. The child's arm lashes out, and suddenly the glass is shattered on the floor. At this point, self-awareness is a simple observation. What made the glass fall? I made the glass fall. There is, then, an "I."

A computer could make a similar observation. Imagine a computer whose job is to direct a network. It decides which computer on the network should do which task, then verifies that it was done. It's conceivable that at some point it could observe that a number of tasks were done by itself. Recognizing that it had done these tasks, and can do them again, is self-awareness.
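
To make the thought experiment concrete, here is a toy sketch (all names invented). Whether such a bookkeeping check deserves the word "self-awareness" is, of course, exactly the open question:

```python
# A toy sketch (invented names): a controller hands out tasks, logs
# who completed each one, and can later notice that some entries in
# its own records point back at itself.

class NetworkController:
    SELF_ID = "controller-0"

    def __init__(self, workers):
        self.workers = list(workers)
        self.log = []                        # (task, completed_by)

    def dispatch(self, task):
        # If no worker is free, the controller does the task itself.
        who = self.workers.pop() if self.workers else self.SELF_ID
        self.log.append((task, who))

    def tasks_i_did(self):
        # The "observation": recognising its own ID as the agent.
        return [task for task, who in self.log if who == self.SELF_ID]

c = NetworkController(workers=["node-1"])
c.dispatch("backup")
c.dispatch("reindex")
print(c.tasks_i_did())                       # ['reindex']
```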

Imagine our surprise when we saw this story, which claims something like this has already happened. [Don't want to pay for the article? Join the discussion about it here.] A NASA probe, Deep Space One, found itself in a similar situation. Rather than interacting with a computer network, the probe interacted with its own mechanical parts. It identified the problem, and knew that the agent to fix the problem was itself. So it fixed itself, and completed its mission.

Even considering the possibility of self-aware computers can cause fear and outrage. Many people find the idea that brains are machines to be repugnant. Others dread to consider that computers, once fully autonomous, might compete with humans for ascendancy. But disturbing and frightening facts are still facts, and can't be wished away. Stephen Hawking is one of those sounding the alarm right now. We should immediately start modifying our own species to keep up with computers, he says, lest computers soon start 'taking over the world.' We don't like to upstage Mr. Hawking, but Jinx offered the same warning back in 1999.

They state roughly what I think of self-awareness... but their example of the probe being self-aware is not a good example, because it was almost definitely a pre-programmed repair system that fixed the problem rather than the computer thinking for itself.

On the same note: Should we call self-healing servers sentient?
No... it is just a bunch of pre-programmed checks and fixes.
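
For instance, a "self-healing" server loop usually amounts to something like this minimal Python sketch (check and fix names are made up; real systems are fancier, but the shape is the same):

```python
# A minimal sketch of "self-healing": a fixed table of health checks
# mapped to fixed remedies. Every behaviour here was written in
# advance by a human; nothing is decided by the machine.

def disk_ok():     return True               # stand-ins for real probes
def service_ok():  return False

def free_disk():        print("freeing disk space...")
def restart_service():  print("restarting service...")

CHECKS = [
    (disk_ok,    free_disk),
    (service_ok, restart_service),
]

def watchdog_pass():
    for probe, remedy in CHECKS:
        if not probe():                      # pre-programmed check...
            remedy()                         # ...pre-programmed fix

watchdog_pass()    # a real daemon would run this on a timer, forever
```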

Do I know what life is?
No.

Do I know what consciousness is or if it even exists?
No.

Do I think that a computer that thinks for itself and is self-aware is possible in the future?
Yes.

Do I think that we are nothing more than very complex systems of interacting chemicals?
Yes.

Do I think we were created by another being of any sort?
It is still a possibility... even if my other thoughts are true.

It will be a long time (maybe never) before we can know anything more about life and our origin.

So, for now, I don't have beliefs.
I have ideas.

If an idea is proven false you say "Oh well..." and go back to the drawing board... beliefs are a bit harder to let go of.

In my opinion, you should never have unquestioning trust in what anyone tells you.
People lie, cheat, steal, and kill... especially people in the extremes like religious zealots* (and anti-religious zealots, too) and politicians.

You must think and decide for yourself.

* I am referring to a "fanatically committed person" and not the people that fought against Roman rule in Palestine in the first century A.D.
 
Originally posted by MrD
Whatever makes a being self-aware is irrelevant. The point is that we humans are self-aware and it is a fundamental part of our behaviour. Without it, we would not act the way we do. Thus if we produce an artificial intelligence that replicates our behaviour then it must be self-aware, because it is not possible to behave in this way without it.
That's the most absurd thing I've ever heard. To me, your logic looks something like this:

a=b and c=d. Therefore, a=d.

Claiming that something that mimics self-aware behavior is the same as being self-aware is equally illogical.

I think the problem is, you are begging the question rather than establishing a solid, provable premise on which to base your arguments.
 
Originally posted by OCybrManO
their example of the probe being self-aware is not a good example, because it was almost definitely a pre-programmed repair system that fixed the problem rather than the computer thinking for itself.
The example would only be valid if the probe was not programmed with a self-repair routine and spontaneously conceived of the action with no outside influence other than its own survival.
So, for now, I don't have beliefs.
I have ideas.

If an idea is proven false you say "Oh well..." and go back to the drawing board... beliefs are a bit harder to let go of.
Nothing like a little philosophy from a Kevin Smith film to keep you thinking straight. But this also necessarily ignores the fact that we act on unwavering belief--what some might call faith--on a daily basis. For instance, you sit down on the chair behind your desk because you have faith that it will support your weight. You pull into an intersection when the light turns green because you have faith that it is safe to do so. You eat a McDonald's hamburger made by someone you don't know under circumstances that you are ignorant of, yet you have enough faith to stick that thing in your mouth and ingest it. In short, it is impossible for humans to operate without some degree of faith.

While the whole "idea vs. belief" thing sounds great from a pop-philosophy standpoint, it is a largely impractical and unrealistic way to conduct your life.
 
Originally posted by Mountain Man
Nothing like a little philosophy from a Kevin Smith film to keep you thinking straight. But this also necessarily ignores the fact that we act on unwavering belief--what some might call faith--on a daily basis. For instance, you sit down on the chair behind your desk because you have faith that it will support your weight. You pull into an intersection when the light turns green because you have faith that it is safe to do so. You eat a McDonald's hamburger made by someone you don't know under circumstances that you are ignorant of, yet you have enough faith to stick that thing in your mouth and ingest it. In short, it is impossible for humans to operate without some degree of faith.

While the whole "idea vs. belief" thing sounds great from a pop-philosophy standpoint, it is a largely impractical and unrealistic way to conduct your life.
Which one? Dogma?
I wasn't really thinking about Dogma when I wrote that.
I thought like that long before I had ever seen any of his movies.

I do most of those things because I don't care about the outcome and/or it just takes too much time to make sure it is safe... not because I have faith that the outcome will be positive.
If I think that it is more likely to have a positive outcome I will take the chance.

I operate more on laziness than faith.
 
Research nanotechnology. There are natural representations of 1's and 0's.

Within us, different states of electrons in an atom are a representation of real-life 1 and 0. When an electron jumps from one atomic energy level to another, it releases energy. From 1 to 0 = energy, per se.

The same thing can be applied to machines. We could build machines with these naturally occurring 1's and 0's: purely organic, purely artificially created.
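
As a concrete check of the "from 1 to 0 = energy" idea, here is the standard Bohr-model arithmetic for hydrogen in a few lines of Python (hydrogen is just the textbook example; calling the two levels "1" and "0" is my own labelling):

```python
# The standard Bohr formula for hydrogen: E_n = -13.6 eV / n^2.
# Call the n=2 level "1" and the n=1 level "0"; dropping from 1 to 0
# releases a photon of fixed, repeatable energy.

def hydrogen_level_ev(n: int) -> float:
    return -13.6 / n**2                  # energy of level n, in eV

released = hydrogen_level_ev(2) - hydrogen_level_ev(1)
print(f"n=2 -> n=1 releases {released:.1f} eV")   # 10.2 eV
# A discrete, repeatable energy signature is exactly what makes two
# levels usable as a physical bit.
```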

Possible, yes?

I may have used some wrong words or something around there, but what I said was fact.

Nanotechnology, research it.

Also, if a machine can write its own code, then it would know how to decide what code to write; it would be able to remember; if it can remember, it can learn; if it can learn, it gains knowledge.
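
Here's a toy version of that chain in Python (everything invented for illustration): a program that remembers which responses worked and rewrites its own behaviour table accordingly. Memory, then learning, then knowledge, in miniature:

```python
# A toy version of the chain: remember outcomes -> learn -> knowledge.
# The "code it writes" is just its own behaviour table.

import random

class Learner:
    def __init__(self, actions):
        self.scores = {a: 0 for a in actions}   # its "memory"

    def choose(self):
        if random.random() < 0.1:               # explore occasionally
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def remember(self, action, worked):
        self.scores[action] += 1 if worked else -1

agent = Learner(["greet", "flee"])
for _ in range(20):
    a = agent.choose()
    agent.remember(a, worked=(a == "greet"))    # feedback from outside
print(agent.scores)    # "greet" dominates: remembered -> learned
```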

that's my $0.02
 
With nanotechnology we have already re-created human body parts, purely artificially. How much longer till we have a brain? Make that brain part of an electric circuit... voilà, the machine has a brain... consciousness.
 
Originally posted by grekster
I haven't read all this, just the first four-odd pages, so I apologise if I'm repeating anyone, but if they did make fully self-aware AI and all that, what about when you exit the game and turn your computer off? Won't that be just as bad as shooting the in-game character?


I think we call that Armageddon
 
Originally posted by OCybrManO
I do most of those things because I don't care about the outcome and/or it just takes too much time to make sure it is safe... not because I have faith that the outcome will be positive. If I think that it is more likely to have a positive outcome I will take the chance.

I operate more on laziness than faith.
You're basically saying the same thing I did, only you've worded it differently.

You say "laziness" but I think it would not be a stretch to say that what you call laziness some would classify as faith. In other words, you sit in the chair not because you've done an exhaustive structural analysis of it and have determined that it will support your weight but because you're too lazy to conduct such a thorough investigation. In other words, you have faith that the the chair will support your weight. ;)
 
Originally posted by nnyexoeight000
With nanotechnology we have already re-created human body parts, purely artificially. How much longer till we have a brain? Make that brain part of an electric circuit... voilà, the machine has a brain... consciousness.
But that's just it. There is more to consciousness than simply making sure the right neurons fire at the right time.
 
Originally posted by Mountain Man
You say "laziness" but I think it would not be a stretch to say that what you call laziness some would classify as faith. In other words, you sit in the chair not because you've done an exhaustive structural analysis of it and have determined that it will support your weight but because you're too lazy to conduct such a thorough investigation. In other words, you have faith that the the chair will support your weight. ;)
No.

The difference is that I know the chair could break under my weight, but I just don't care if it does or not.

Faith is believing something to be true without evidence.

Since I know that there is a chance that the chair might break, I am not sitting in the chair on faith that it won't break.
I am taking a risk.
 
Originally posted by OCybrManO
They state roughly what I think of self-awareness.

Ah, it's nice to have someone else that thinks the same way as me, not to mention the "Ministry of Intelligence", whoever they are. I was getting lonely.

Originally posted by Mountain Man
That's the most absurd thing I've ever heard. To me, your logic looks something like this:

a=b and c=d. Therefore, a=d.

Okay I'll try and express my thoughts better...

I believe that "self-awareness" manifests itself as a unique and distinctive behaviour pattern. This behaviour can only occur as a result of self-aware thought simply because it is the act of self-aware thought that produces that very behaviour (such as begging for your life). Therefore if an artificial lifeform exhibits that behaviour it must be self-aware.

Note: the above assumes no pre-programmed, or built-in behaviour patterns.

EDIT: And I also note that OCybrManO has inadvertently posted some evidence that backs up my hypothesis that a baby is not self-aware.
 
Obviously, the level of awareness and abstract thought is directly related to an organism's level of communication. Without our advanced lingual system to categorize what we perceive, we would be left at the level of a dog or monkey. An apple wouldn't be an 'apple' without the word 'apple'... without the word, it would be 'uhmfff' or 'woof' or 'yumm'. Maybe all tasty sweet things would be 'mmm'. Without this system, we would not be able to conceptualize and abstract, externally or internally, as we do now, limiting our creativity. We program our babies from day one with these lingual systems, and it's not until they are well saturated with words, and the various hues with which they define the world, that they begin to grasp and express higher concepts.

What seems interesting to me is the leap we have made in the past 10 or so years in our level of 'global' communication. We have, in a sense, connected the macro-neurons of this whole planet through the internet. Could this massive system of communication play some part in teaching computers to abstract? What if, in the future, we develop a system of 'teaching' this computer by plugging every single person (remotely) into this global mind, and feeding it data about our simple reactions to, and definitions of, the world around us? With a system of 'remote organic programming', would we be able to possibly grow an intelligent machine? Interestingly, though, a sentient machine grown from the passive input of millions of humans should probably be defined more as 'collective intelligence' than anything artificial.
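
If you wanted to prototype that 'remote organic programming' idea, the crudest possible version is just pooling human reactions and majority-voting them. A toy Python sketch, all names and data invented:

```python
# The crudest possible "remote organic programming": pool many
# people's reactions and let the majority label become the machine's
# "definition" of a stimulus.

from collections import Counter, defaultdict

class CollectiveMind:
    def __init__(self):
        self.votes = defaultdict(Counter)    # stimulus -> label counts

    def feed(self, stimulus, human_reaction):
        self.votes[stimulus][human_reaction] += 1

    def definition(self, stimulus):
        # The machine's "concept" is whatever most humans said.
        return self.votes[stimulus].most_common(1)[0][0]

mind = CollectiveMind()
for reaction in ["sweet", "sweet", "fruit"]:
    mind.feed("apple", reaction)
print(mind.definition("apple"))              # 'sweet'
```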

Anyway, just me dreaming out loud here.
 
Originally posted by MrD
EDIT: And I also note that OCybrManO has inadvertently posted some evidence that backs up my hypothesis that a baby is not self-aware.
I never said that they weren't not self-aware (or "were self-aware" for the language Nazis).

Your second quote should be attributed to Mountain Man.
 
Originally posted by OCybrManO
I never said that they weren't not self-aware (or "were self-aware" for the language Nazis).

Your second quote should be attributed to Mountain Man.

Yes, I know but someone else did. I was just noting that your post contained evidence to support what I was saying ;)

And.. fixed (multiple quotes are a bugger to do).
 
Originally posted by OCybrManO
The difference is that I know the chair could break under my weight, but I just don't care if it does or not.
Your feigned apathy is inspiring.
Faith is believing something to be true without evidence.
Then you don't understand the nature of faith. Faith provides its own evidence in that you act on it. What you describe is belief without action which is not faith.
Since I know that there is a chance that the chair might break, I am not sitting in the chair on faith that it won't break. I am taking a risk.
But it still takes a degree of faith to assume that the risk is a reasonable one.
 
I still believe that in the future we will create a "computer" that is self-aware. I don't think it will be built from silicon chips like today's machines. I would think it would be more organic.
 
Originally posted by MrD
I believe that "self-awareness" manifests itself as a unique and distinctive behaviour pattern. This behaviour can only occur as a result of self-aware thought simply because it is the act of self-aware thought that produces that very behaviour (such as begging for your life). Therefore if an artificial lifeform exhibits that behaviour it must be self-aware.
This is circular reasoning. You're basing the premise on the conclusion as opposed to drawing a conclusion from the premise. You also seem to be making the argument that self-awareness is a behavior rather than a state of being, such that if something appears to exhibit self-awareness then it must necessarily be self-aware.

One test of self-awareness is creative thought. For instance, a child may observe another child bouncing a ball, so the first child mimics the behavior. This is as far as artificial intelligence can go. However, the child can go beyond that. Without any outside influence, the child can conceive of the possibility that the ball can not only bounce but be thrown up in the air. From there they can deduce that the ball is capable of bouncing off more than just the ground, such as a wall or a bench.

In short, that's where artificial intelligence needs to be before it can even have a chance of becoming self-aware. It must be able to conceive of and implement unique solutions to problems without any sort of outside influence. In other words, it must be able to recognize its own influence on its surroundings and realize that it can change the state of its environment through its own will.

Frankly, I don't think computers will ever be capable of such thought.
 
Originally posted by Mountain Man
Your feigned apathy is inspiring.
Why do you assume it is feigned? You do not know me.
Do you really care that much about falling on your ass from a height of a foot and a half?

Originally posted by Mountain Man
Then you don't understand the nature of faith. Faith provides its own evidence in that you act on it. What you describe is belief without action which is not faith.
Faith is a belief that comes from no logical proof. Look it up.
Using faith as evidence to prove one's faith is a very shaky argument, to say the least.

Originally posted by Mountain Man
But it still takes a degree of faith to assume that the risk is a reasonable one.
I guess it might take a slight bit of faith that the chair will not break the laws of physics (which is one of the subjects I like to mess around with in my spare time)... or that someone hasn't planted a heat-sensing bomb under the chair.
 
Originally posted by Mountain Man
There is more to consciousness than simply making sure the right neurons fire at the right time.

If that is the case, then why are you conscious? You are just a big bag of neurons firing at the right time. If you believe that there is something "extra" then that must either come from mum or dad (or both) since that is where the atoms that make the sperm and egg come from. If that is the case we will ultimately be able to find this mysterious body part that produces the something "extra" and use it to produce something "extra" for our artificial life-forms.

Either way, you will not get accurate human behaviour without that "self-awareness" ingredient, because that is what causes us to behave the way we do.

Originally posted by Mountain Man
Then you don't understand the nature of faith.

Faith n.
1. Strong or unshakeable belief in something, especially without proof.

(Collins English Dictionary)

What does faith have to do with self-awareness?
 
Originally posted by MrD
If that is the case, then why are you conscious? You are just a big bag of neurons firing at the right time.
You're looking at things from a purely physiological perspective. I believe that consciousness and self-awareness require much more than a simple chemical or electrical reaction.
If that is the case we will ultimately be able to find this mysterious body part that produces the something "extra" and use it to produce something "extra" for our artificial life-forms.
Ah, yes, the unshakable faith that science will one day provide all the answers.
Either way, you will not get accurate human behaviour without that "self-awareness" ingredient, because that is what causes us to behave the way we do.
Exactly. And it's my opinion that science will never discover this magic "ingredient."
Faith n.
1. Strong or unshakeable belief in something, especially without proof.
At least that's one definition. But it doesn't encompass the nature of faith. Ignorance is not faith, which is what this definition seems to suggest.

----------

Originally posted by OCybrManO
Do you really care that much about falling on your ass from a height of a foot and a half?
That's irrelevant. Taking it a step further, you step into an airplane having faith that the pilot is competent to operate the aircraft. Now surely you would care about the consequences if your faith was misplaced?
Using faith as evidence to prove one's faith is a very shaky argument, to say the least.
The point is, faith is provable. Faith and ignorance are two completely different things.

But as I see we are starting to go in circles, I think I'll lurk for a while and see if any other compelling arguments are presented.

So until then...
 
That's irrelevant. Taking it a step further, you step into an airplane having faith that the pilot is competent to operate the aircraft. Now surely you would care about the consequences if your faith was misplaced?
You make it sound like deciding to fly is all faith... when it is a comparison of statistics and deciding the lesser of two evils.

You can die doing anything... including doing nothing.

Understanding and accepting the risks (and trying to reduce them if possible) is all you can do about it.

The only other option is "let come what may."

So, you need to get somewhere not within walking distance...
You either drive or fly.
- Driving is statistically more likely to kill you but it is much more convenient to drive over most distances that people usually travel.
- Flying is statistically less dangerous but it is more expensive and only useful for travel over large distances.
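
To show it's statistics rather than faith, here's the back-of-the-envelope calculation in Python. The rates below are rough orders of magnitude for illustration, not citations; the point is the procedure:

```python
# Illustrative orders of magnitude only; the point is the procedure:
# expected risk = (deaths per mile) * (miles travelled).

DRIVE_DEATHS_PER_MILE = 1.5e-8    # roughly 1.5 per 100 million miles
FLY_DEATHS_PER_MILE   = 2.0e-10   # commercial aviation, far lower

def trip_risk(miles, rate_per_mile):
    return miles * rate_per_mile

miles = 1000
print(f"drive: {trip_risk(miles, DRIVE_DEATHS_PER_MILE):.1e}")
print(f"fly:   {trip_risk(miles, FLY_DEATHS_PER_MILE):.1e}")
# Comparing the two numbers (plus cost and convenience) is the
# "lesser of two evils" calculation. No faith required.
```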

The point is, faith is provable. Faith and ignorance are two completely different things.
The fact that some instances of faith can be proven to be true has no relevance to the discussion at hand.

Faith initially has a lack of proof/evidence (which is ignorance).

Once a belief or thought is proven it becomes knowledge.
 
Originally posted by Mountain Man
Ah, yes, the unshakable faith that science will one day provide all the answers.

Exactly. And it's my opinion that science will never discover this magic "ingredient."

Given the phenomenal rate at which science has developed thus far, it would be very strange for it to suddenly stop and not be able to explain any further. And "never" is a very, very long time...
 
Mountain Man is a religious nut. Who gives a flying **** about religion in this discussion? I used the photocopy example as an example, to dumb it down in principle for you folks. Go read Spiritual Machines by Kurzweil. In ten years we may be able to accurately scan a human brain and reproduce the scanned neurons and connections with computer components, in the same patterns in which they were scanned. When all hooked up, the computer will claim to be the person who was copied. Will the scientists and doctors better understand the brain? No. Will we have AI? Freaking of course we will. End of discussion. Now I'll run away before Mountain Man starts chucking religious lightning bolts at me! :D :D
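
For the curious, the scan-and-copy idea reduces to something like this toy Python sketch: a matrix of "scanned" connection strengths with activity propagated through it. A 100-node stand-in, obviously not a claim that running it produces a person:

```python
# A toy reduction of scan-and-copy: the "scan" is a matrix of
# connection strengths, and the "copy" just propagates activity
# through it. 100 nodes stand in for billions of neurons.

import numpy as np

rng = np.random.default_rng(0)
N = 100
weights = rng.normal(0.0, 0.1, (N, N))   # the "scanned" wiring
state = rng.random(N)                    # current activation levels

def step(state, weights):
    # Each cell's next activation is a squashed sum of its inputs,
    # a (very) loose nod to integrate-and-fire behaviour.
    return np.tanh(weights @ state)

for _ in range(10):
    state = step(state, weights)
print(state[:5])    # the copy "runs", faithful to the scanned wiring
```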
 
Um, you guys say 10 years, but at the moment it's a lot farther off than that, I'm afraid.
 
Originally posted by FictiousWill
Mountain Man is a religious nut.
I believe I have elucidated my arguments and opinions clearly and rationally, so your ad hominem statement is uncalled for.
 
Originally posted by Mountain Man
Exactly. And it's my opinion that science will never discover this magic "ingredient."

(in response to: Either way, you will not get accurate human behaviour without that "self-awareness" ingredient, because that is what causes us to behave the way we do.)

Erm, you actually seem to agree with my point. By extension of the above I conclude that any AI that provides accurate human behavior must be "self-aware" because "you will not get accurate human behaviour without that self-awareness ingredient".

My point all along being that IF we do get to that stage (of accurate human behaviour) then it will be unethical to kill it.

The only disagreement we appear to have is: will we ever get that far?

Originally posted by FictiousWill
Mountain Man is a religious nut.
Grow up.
 
Originally posted by Finger
What seems interesting to me is the leap we have made in the past 10 or so years in our level of 'global' communication. We have, in a sense, connected the macro-neurons of this whole planet through the internet. Could this massive system of communication play some part in teaching computers to abstract? What if, in the future, we develop a system of 'teaching' this computer by plugging every single person (remotely) into this global mind, and feeding it data about our simple reactions to, and definitions of, the world around us? With a system of 'remote organic programming', would we be able to possibly grow an intelligent machine? Interestingly, though, a sentient machine grown from the passive input of millions of humans should probably be defined more as 'collective intelligence' than anything artificial.

Anyway, just me dreaming out loud here.
It's an interesting, and potentially possible, scenario that a computer could learn from human knowledge and experience, but what that would create is an extremely complex set of programmed responses that could "appear" to act with a human level of intelligence, but it would not become sentient, no matter how much it would appear to be so. Some people may argue that our own brains are just extremely complex machines (implying that our complex thought is nothing more than a vast set of instinctive, pre-programmed (so to speak) responses), but if you truly believe this then you are forfeiting responsibility for your actions. Clearly "you" (that being your awareness/consciousness/"soul"/whatever) have active and conscious control over your actions, implying that there is something else in there in addition to all the neurons flying around. There is something there that cannot be explained, or copied, by physical means.

I'd go as far as saying that if you believe there's nothing more to humans than a physical body with a physical brain, you are yet to realize that you even exist. Think about it. Who are you? What are you?
 
So Logic, what you are saying is we may one day create a computer powerful enough to mimic the human mind convincingly, but it will not actually be self-aware, it would be an illusion?
 
Originally posted by Logic
It's an interesting, and potentially possible, scenario that a computer could learn from human knowledge and experience, but what that would create is an extremely complex set of programmed responses that could "appear" to act with a human level of intelligence, but it would not become sentient, no matter how much it would appear to be so. Some people may argue that our own brains are just extremely complex machines (implying that our complex thought is nothing more than a vast set of instinctive, pre-programmed (so to speak) responses), but if you truly believe this then you are forfeiting responsibility for your actions. Clearly "you" (that being your awareness/consciousness/"soul"/whatever) have active and conscious control over your actions, implying that there is something else in there in addition to all the neurons flying around. There is something there that cannot be explained, or copied, by physical means.

I'd go as far as saying that if you believe there's nothing more to humans than a physical body with a physical brain, you are yet to realize that you even exist. Think about it. Who are you? What are you?

That's nonsense.

I forfeit no responsibility for my actions. My neurons control my actions, yes, but I am my neurons. Any decision my neurons are making, I am making, because I am them and they are me. We are all one big neuronic thing.

This matches the scientific fact that if you change someone's neurons (i.e. brain damage) then their personality/behaviour changes, i.e. they are a different person.

If you think that humans are not "an extremely complex set of programmed responses" then omg. What on earth do you think learning is?? You are programming yourself with sets of responses.
 
Originally posted by qckbeam
So Logic, what you are saying is we may one day create a computer powerful enough to mimic the human mind convincingly, but it will not actually be self-aware, it would be an illusion?
Exactly. Unless of course MrD is right, and we ARE just neurons! OMG indeed.

Of course, brain damage can certainly affect our behaviour. It can change the way we perceive things, our mind's reactions to things, our ideas on what responses would be appropriate, and our level of intelligence. However, there is still something, an origin of ideas, which is YOU, that will react utilising what information the brain gives it, and will utilise the brain to react. This is why brain damage changes your behaviour.

If you believe that you are nothing more than a complex set of pre-programmed responses, then you are not only forfeiting responsibility, you are denying the existence of responsibility. If we are merely physical machines who operate deterministically based solely on the stimuli we are presented with, then there is no such thing as responsibility, since there is only one possible reaction to every scenario: the one we take. This is where the whole determinism argument gets interesting, but to explore that properly, we'd have to get into physics, quantum physics, and (you guessed it) more "spirituality".

Basically, what I'm saying is that in order to be responsible for things, you have to have the ability to make a choice. In order to be able to make a choice, you have to be more than pre-programmed responses. You have to be a sentient being that can exercise free will, which is control over your mind's actions. Basically, MrD, you are denying the existence of free will, which denies that we are sentient beings at all. Surely the fact that you are conscious and that you perceive time is reason enough to believe that you are more than a machine. If not, and your replies to this thread are simply a brain at work without an awareness, then perhaps you are just a machine, and "you" don't really exist. Scary. What if there are people out there without souls?

Edit: stimulus -> stimuli
Edit2: is -> are . me fail english?! thats' unpossible!
 
Originally posted by Logic
If we are merely physical machines who operate deterministically based solely on the stimuli we are presented with, then there is no such thing as responsibility, since there is only one possible reaction to every scenario: the one we take.

Lol, it's funny you bring that up. I believe this is the key to my reasoning in fact, and yes it is scary.

Put it this way... look back into your past. Given any one situation from your childhood that you would like to change, go back to that day and re-live it. Without the knowledge of the event that you have now, would you ever live that day any differently? No! You would always make the same choice (because the choice was based on your memories and state-of-mind at that time, which would be the same if you re-lived it).
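
You can even put the thought experiment in code (a toy model, not a claim about real brains): make the "choice" a pure function of memories and state of mind, and replaying the same inputs always replays the same decision. All the details below are invented:

```python
# A toy model: if a "choice" is a pure function of memories and state
# of mind, replaying the same inputs must replay the same decision.

import hashlib

def decide(memories, state_of_mind):
    # Deterministic: no randomness, no hidden inputs.
    digest = hashlib.sha256(repr((memories, state_of_mind)).encode())
    options = ["take the dare", "walk away"]
    return options[digest.digest()[0] % len(options)]

childhood_day = (("best friend", "a dare", "the playground"), "nervous")
first_time = decide(*childhood_day)
relived    = decide(*childhood_day)      # same memories, same state
print(first_time == relived)             # True -- always the same
```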

From this I conclude that "free will" is just an illusion, and that everything we have done, and will do, has and can already be determined. I don't like the idea much but it seems the most logical conclusion.

And anyway, it still feels like I'm in control of what I am doing.
 
Originally posted by MrD
Lol, it's funny you bring that up. I believe this is the key to my reasoning in fact, and yes it is scary.

Put it this way... look back into your past. Given any one situation from your childhood that you would like to change, go back to that day and re-live it. Without the knowledge of the event that you have now, would you ever live that day any differently? No! You would always make the same choice (because the choice was based on your memories and state-of-mind at that time, which would be the same if you re-lived it).

From this I conclude that "free will" is just an illusion, and that everything we have done, and will do, has and can already be determined. I don't like the idea much but it seems the most logical conclusion.

And anyway, it still feels like I'm in control of what I am doing.
Believe it or not, and despite my arguing on this thread, I have actually reached the same conclusion in the past. It is a complex issue, though, and I've recently done a lot of thinking, and come across scenarios, information, and people that have led me to explore more spiritual viewpoints (still using my own reasoning and logic, of course), and as of yet I am undecided. From a purely logical standpoint, though, at the present time, I do believe in determinism. I intend to do a hell of a lot more research (actually I've never done any actual research yet) and thinking before I am satisfied with that. I did have a conversation a while back with my dad in which I came up with some kind of theory that accepted determinism but also allowed for free will... I'll try to remember it :) (I'm really tired at the moment, so my memory's a bit vague)

Edit: just to clarify, the reason I'm undecided is that the illusion of free will does one of two things: it either denies our sentience, or means that we have the ability to perceive but not to act. As a creative individual, I don't like to readily (or happily, anyway) accept the idea that I am not responsible for my ideas, creations or actions.
 
Originally posted by Logic
Believe it or not, and despite my arguing on this thread, I have actually reached the same conclusion in the past. It is a complex issue, though, and I've recently done a lot of thinking, and come across scenarios, information, and people that have led me to explore more spiritual viewpoints (still using my own reasoning and logic, of course), and as of yet I am undecided. From a purely logical standpoint, though, at the present time, I do believe in determinism. I intend to do a hell of a lot more research (actually I've never done any actual research yet) and thinking before I am satisfied with that. I did have a conversation a while back with my dad in which I came up with some kind of theory that accepted determinism but also allowed for free will... I'll try to remember it :) (I'm really tired at the moment, so my memory's a bit vague)

Sounds interesting, I would like such a theory :)

As a development: It just occurred to me that everything must be pre-determinable, otherwise there would be no way for the universe to determine it! (and hence move us all constantly forward in time)

Another logical support for determinism.
 
Originally posted by MrD
If this "YOU" is the origin of our ideas, then any convincing human AI would have to have one, otherwise it wouldn't be like us. Thus, if we ever create a realistic human AI, then we must have accidently created our "YOU" as a side-effect. Result is that AI became self-aware as we call it.
If this "YOU" is in fact a spiritual or non physical entity, then we can never accidentally create it, therefore we will never create self aware AI.
 
Originally posted by Logic
If this "YOU" is in fact a spiritual or non physical entity, then we can never accidentally create it, therefore we will never create self aware AI.

Indeed, I agree with that one.
 