Scary.

[Scary Voice] In the future, there will be giant robot wars. [/Scary Voice]
 
AI warfare's the future. A human body can only sustain so much and a human mind can only calculate so many variables. I say the greatest achievement of the human race will be super AI. AI > human intellect.
 
Yeah...but guys, doesn't this sound an awful lot like Skynet or some shit?
 
The greatest achievement the human race can hope for is not committing suicide.
 
Humanity will be inferior to future AI. Then it is the AI that will make achievements far greater than mankind could ever hope to reach.
 
fizzlephox said:
AI warfare's the future. A human body can only sustain so much and a human mind can only calculate so many variables. I say the greatest achievement of the human race will be super AI. AI > human intellect.

Super AI is still a long way off though, I think; right now computers can only make very basic decisions on their own.
 
fizzlephox said:
Humanity will be inferior to future AI. Then it is the AI that will make achievements far greater than mankind could ever hope to reach.

And then we will all die, woot.
 
"The X-45A was preprogrammed with the target coordinates and used the satellite-based Global Positioning System to adjust its course."

We told it where to bomb and it did the rest. There's not really any independent AI work going on here. When they take it another step further and make the planes capable of picking and bombing targets completely on their own...that's where we might run into problems... I doubt we'll see that for a long time, and I still doubt anything like Skynet will ever be created.
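That preprogrammed-coordinates-plus-GPS scheme is simple enough to sketch. Here's a toy version (hypothetical names, crude flat-earth math, nothing like real flight software): given the current GPS fix, the preprogrammed target coordinates, and the current heading, it works out which way to turn.

```python
import math

def course_correction(current, target, heading):
    """Toy course adjustment: current and target are (lat, lon) in
    degrees, heading is degrees clockwise from north. Returns the
    bearing to the target and the turn needed to point at it."""
    d_lat = target[0] - current[0]
    # Scale the longitude difference by cos(latitude) so east-west
    # distances aren't overstated away from the equator.
    d_lon = (target[1] - current[1]) * math.cos(math.radians(current[0]))
    bearing = math.degrees(math.atan2(d_lon, d_lat)) % 360
    # Normalise the turn into (-180, 180]: negative means turn left.
    turn = (bearing - heading + 180) % 360 - 180
    return bearing, turn
```

So a plane at (34.0, -117.0) already heading due east toward a target at (34.0, -116.0) gets a turn of zero; the real system just keeps re-running this against fresh GPS fixes. No decision-making anywhere, exactly as the quote says.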
 
Yeah, us people are all too paranoid. :D
Also, it is good to see more and more men being replaced by this sort of thing.
 
Direwolf said:
Yeah, us people are all too paranoid. :D
Also, it is good to see more and more men being replaced by this sort of thing.

I'm all for keeping people safer...It's just that maybe giving guns and bombs to robots isn't really the smartest thing in the world.

AmishSlayer said:
We told it where to bomb and it did the rest. There's not really any independent AI work going on here.

It's true that it was just given some GPS coordinates and told what to do, but that really doesn't assure me much. I mean, would you fly on a robotically controlled plane? Even if it just was told where to go and had no independent AI?
 
Letters said:
The greatest achievement the human race can hope for is not committing suicide.

:laugh: :laugh: new sig.

And as for AI, I think cautionary paranoia would be a good quality to possess when developing a machine more capable than a human.
 
All they're doing is making it so that they don't need an onboard pilot to fly bombing missions against known targets.

Right now, what a pilot does is fly the bomb to the target area, point a laser designator at a spot, drop the bomb and wait until it explodes. There is nothing in that description that a machine cannot do perfectly well.

What will probably happen is that a bunch of operators will sit in a bunker with a dozen or more robot airplanes each, review images sent to them from the UAV to determine if the target is there and then give the go-ahead.
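That bunker workflow boils down to a human-in-the-loop filter. As a toy sketch (every name here is made up; real ground-station software is obviously nothing this simple), each UAV sends back an image of its target area and a weapon is only released once an operator gives the go-ahead:

```python
def review_strikes(images, operator_approves):
    """Toy human-in-the-loop release logic: images maps a UAV id to
    the picture it sent back; operator_approves is the human review.
    Returns the ids cleared to release."""
    cleared = []
    for uav_id, image in images.items():
        if operator_approves(image):   # a person reviews every image
            cleared.append(uav_id)     # only then send the go-ahead
        # otherwise the UAV holds or gets retasked by the operator
    return cleared
```

The point of the design is that the machine never originates the decision: it flies, looks, and waits, and the kill decision stays with the operators in the bunker.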
 
DarkStar said:
It's true that it was just given some GPS coordinates and told what to do, but that really doesn't assure me much. I mean, would you fly on a robotically controlled plane? Even if it just was told where to go and had no independent AI?

Yes I would...because the government would never allow that sort of thing to go public without years and years of testing.

EDIT: To clarify... airliners being flown completely by an onboard computer...that's what I'm talking about.
 
IonGorilla said:
Super AI is still a long way off though, I think; right now computers can only make very basic decisions on their own.

I'd hardly say today's computers can make even basic decisions. They mostly follow programs, with a bit of randomness thrown in. It will be a long time, I think, before any sort of actual decision-making capability is made.
 
AmishSlayer said:
Yes I would...because the government would never allow that sort of thing to go public without years and years of testing.

EDIT: To clarify... airliners being flown completely by an onboard computer...that's what I'm talking about.

Ah, but when's the last time you went and looked in the cockpit? Maybe they're already doing it and you just don't know. :P
 
Uh, the U.S. military has had that capability for several years now.

Edit: sry, didn't read the autonomous bit. But even so, isn't this what cruise missiles do?
 
It's not a problem if we create AI that is superior to us... it's all a matter of controlling it.

It takes a real coy son of a bitch to control something substantially superior to himself.
 
All you'd need to do is influence the AI's psychology permanently before boot-up to make it completely subservient to humans... and if we can at any point in the future make sentient AIs, then we could easily do that...
 
This is better than a cruise missile simply because a full-fledged plane can carry more than one, is reusable, and is much less expensive.
 
Who says computers will turn on us? Why would they do it? So long as we don't program it into them, they won't blow us up out of some sense of superiority or even the need for survival. The only way we will die by the hands of a robot is when some guy has told it to do so... They will never "evolve" beyond us or anything like that. It is impossible to make something inherently better than you; perhaps robots will have abilities we don't, but they will not have the capacity to destroy us without human influence. They are, and always will be, tools...
 
Independent AI.

They will not turn against us. They will see us as their fathers. Even if we turn against them, they will only eliminate the ones who do; that is, they will always allow the human race to endure, as they will want to preserve us the way we want to preserve the chimps.

They will be highly logical, therefore they will not exterminate us.
 
Bah, things like this scare me, because human casualties are pretty much the only thing stopping the US government going to war at the moment. With that out of the way I think there will be a lot more wars. And that is rarely a good thing.

Please note: I supported the Iraq war because even if I don't believe it was started for the right reasons, I do believe there have been good effects.

Please don't flame me for my opinions. I realise this sounds bad because it sounds like I want people to die or something. But in fact I want the opposite.
 
That is a problem of a different sort, though. I think any possible psychological impacts of something like this are very minor compared to the good effects it would have.
Not that we're overly worried about our pilots anyway; air superiority has been the name of the game for us for a long time now.
 
I suppose it's for numerous reasons they make these things...

I for one wel...damn, it's been done.

But really, I think this is certainly the best way forward. Well, apart from not needing them anymore, that is... who knows, we may just stop war on Earth. Those evil commie Nazi aliens that keep stealing morons and cattle are just around the corner.
 
fizzlephox said:
Humanity will be inferior to future AI. Then it is the AI that will make achievements far greater than mankind could ever hope to reach.

Let's just hope they say "Thank you for life." As opposed to annihilating or enslaving us.
 
Farrowlesparrow said:
Who says computers will turn on us? Why would they do it? So long as we don't program it into them, they won't blow us up out of some sense of superiority or even the need for survival. The only way we will die by the hands of a robot is when some guy has told it to do so... They will never "evolve" beyond us or anything like that. It is impossible to make something inherently better than you; perhaps robots will have abilities we don't, but they will not have the capacity to destroy us without human influence. They are, and always will be, tools...

Computers can't turn on us in their current form, but what if we're talking artificial intelligence here? If we created a true artificial intelligence, then we are no longer controlling it through programs. Thus it could turn on us if it wished to. True, we could hard-code certain actions into it so that it wouldn't, but programs don't always work perfectly. As for having the capacity to harm us, why not? Many of our critical systems are based on computers, so if you had an A.I. with access to these it would certainly have that capacity. Also, who's to say they can't "evolve"? What if A.I. computers start building new models and improving on the design? Isn't that a form of evolution?
 