When algorithms control the world

Remus

http://www.bbc.co.uk/news/technology-14306146

I remember a while back someone posted an article here about the Google filter bubble; well, let's just say the problem doesn't stop there.

If you were expecting some kind of warning when computers finally get smarter than us, then think again.

There will be no soothing HAL 9000-type voice informing us that our human services are now surplus to requirements.

In reality, our electronic overlords are already taking control, and they are doing it in a far more subtle way than science fiction would have us believe.

Their weapon of choice - the algorithm.
 
That was me. Cool article. And the intro is right: this is far scarier than HAL because he (it?) at least had a personality.
 
Until the day we create the technological singularity, machines will still be built by and confined to the will of humankind.
 
Interesting but,

British firm Epagogix is taking this concept to its logical conclusion, using algorithms to predict what makes a hit movie. It takes a bunch of metrics - the script, plot, stars, location - and crunches them all together with the box office takings of similar films to work out how much money it will make.
The system has, according to chief executive Nick Meaney, "helped studios to make decisions about whether to make a movie or not".

As far as I'm concerned, that's just using evidence to help predict what will make a good film. Rather than signaling an end to any human involvement, it only quantifies people's reactions to certain aspects of films; it's certainly not worthy of the concern this author seems to place on it.
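To be concrete about what "using evidence to predict" amounts to: at its simplest it's just a regression over film metrics. Here's a minimal sketch of the idea - all the features, numbers, and the linear model are invented for illustration; Epagogix's actual metrics and method are not public.

```python
# Toy sketch of metric-based box-office prediction. All data is made up;
# the real Epagogix features and model are proprietary.
import numpy as np

# Each row: [star power, marketing budget ($M), franchise flag, runtime (min)]
past_films = np.array([
    [8.0, 150.0, 1, 140],
    [3.0,  20.0, 0,  95],
    [6.0,  80.0, 1, 120],
    [2.0,  10.0, 0, 100],
])
past_takings = np.array([900.0, 40.0, 350.0, 15.0])  # box office, $M

# Fit a least-squares linear model: takings ~ X @ w (with an intercept column)
X = np.hstack([past_films, np.ones((len(past_films), 1))])
w, *_ = np.linalg.lstsq(X, past_takings, rcond=None)

def predict_takings(metrics):
    """Predict box office ($M) for a new film from its metrics."""
    return float(np.append(metrics, 1.0) @ w)

new_film = [7.0, 100.0, 1, 130]
print(f"Predicted takings: ${predict_takings(new_film):.0f}M")
```

Nothing here is smarter than the data fed in - it just quantifies past audience reactions, which is exactly the point above.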
 
^Agreed.
The facts are enough to report by themselves. No need to go overboard with the whole "controlling the world" bit.
Even the stock market part is just someone failing to foresee a certain scenario and therefore not programming for it. Not the algorithm controlling the world. It's not like they can't go back and figure out the programmed logic that led to that accident.

Also this whole bit is just ridiculous: "We are running through the United States with dynamite and rock saws so an algorithm can close the deal three microseconds faster, all for a communications system that no humans will ever see." They make it sound like we are physical slaves to some algorithm, when really this is merely a symptom of our desire for wealth / a means of increasing someone's chances of making money. The algorithm is not running the show. The businessman is.
Corrected: "We are running through the United States with dynamite and rock saws so an algorithm can close the deal three microseconds faster, all for a businessman/corporation that none of these workers will ever see."

"It might be time to work out exactly how much they know and whether we still have time to tame them."
More ridiculous wording.
Corrected: "It might be time to work out exactly how much information is being collected and used by Google/FB/whatever, and whether we can have the foresight to improve stock market algorithms and/or implement safeguards to prevent further accidents."
How does one even "tame" a program? The only reasonable thing I can infer from this is that we should increase human operator oversight - like how engineers use decision support or risk assessment programs, but if one comes up with a bogus answer, the engineer can override it using their expertise. If that's what they're trying to say, then they should just say it and not sit around fear-mongering.
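That human-override pattern is simple to express in code. A hypothetical sketch, with an invented risk score and threshold - the point is just that the operator's verdict, when present, always wins over the algorithm's:

```python
# Sketch of "decision support with human override": the algorithm proposes,
# the operator disposes. Risk model and threshold are invented.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Assessment:
    algorithm_verdict: str
    operator_verdict: Optional[str] = None

    @property
    def final_verdict(self) -> str:
        # A human override, when present, always takes precedence.
        return self.operator_verdict or self.algorithm_verdict

def assess_risk(score: float, threshold: float = 0.7) -> Assessment:
    """Automated first pass: flag anything above the threshold."""
    verdict = "reject" if score > threshold else "approve"
    return Assessment(algorithm_verdict=verdict)

# The algorithm produces a bogus answer on an unforeseen input...
result = assess_risk(0.95)
# ...and the engineer, recognising the case, overrides it.
result.operator_verdict = "approve"
print(result.final_verdict)  # prints "approve"
```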

tldr: frickin' journalists
 
Bizarrely written as if algorithms are actually intelligent, but I don't think the concerns are unjustified. dfc05, you say it's ludicrous to suggest people are "slaves to some algorithm", but people are slaves to all kinds of things that aren't human - ideas, infrastructure, bureaucratic practices. These things have a certain life of their own that is bigger than any one person and often outside the capacity of one person to stop or control.

In this case, the danger is that it's extremely tempting for the people who run businesses to rely on algorithms because of the money saved in wages, or because of the speed, or the convenience, or quite possibly the sheer intellectual attraction of a 'perfect system'. And that's where problems begin, because algorithms will always have gaps, will always overlook things, will always overvalue some elements and undervalue others - or at least will not be capable of distinguishing between and balancing kinds of value other than those programmed into them. Precisely because they are not conscious, they also don't have the understanding necessary to realise when they're doing something completely bonkers that their designers never intended. Even a simple common-sense check is at bottom an artificial fix which might not fully anticipate the problem. And still they will doubtless be used by business to increase margins, because businesses have often shown themselves vulnerable to short-term logic. When algorithms start interacting with other algorithms, the situation gets more complicated and any problems multiply by echoing through a series of imperfect metrics...

And then you get this, which is a rather funny, harmless example of the whole phenomenon.

http://www.michaeleisen.org/blog/?p=358
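The mechanism in Eisen's example - two booksellers' pricing bots, each setting its price as a fixed multiple of the other's - fits in a few lines. The multipliers below are roughly the ones Eisen reports (one seller slightly undercutting, the other pricing well above); starting prices and round count are invented for illustration:

```python
# Sketch of two pricing algorithms interacting, as in Eisen's Amazon
# book example. Multipliers are approximately those he reports;
# starting prices and number of rounds are invented.
def run_pricing_bots(price_a, price_b, rounds):
    for _ in range(rounds):
        price_a = 0.9983 * price_b      # bot A slightly undercuts bot B
        price_b = 1.270589 * price_a    # bot B prices well above bot A
    return price_a, price_b

a, b = run_pricing_bots(17.03, 35.54, 30)
print(f"After 30 rounds: ${a:,.2f} vs ${b:,.2f}")
# The combined factor per round is 0.9983 * 1.270589 ~ 1.268 > 1,
# so prices grow exponentially instead of converging.
```

Each bot's rule is locally sensible; it's only their interaction that produces the absurd result - exactly the "problems multiply" point above.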
 
Ah, good explanation. In that respect, I would agree. When I saw the word "control," I just took the connotation of a sentient being imposing their will on something, and then took everything else in the article as examples to fit that meaning, e.g. the fiber optic cable line connoting that the algorithm is out there being all like "give me moooore speed" and making us install fiber optic cables. Same with words like "tame," which I took to mean like taming a sentient being, e.g. a wild animal.

But now I see how these could have been meant in a less sensationalist manner, as algorithms just running unexpectedly. I do find all those examples interesting, just wish they'd been presented as straight-up boring facts.
 
"We are writing these things that we can no longer read," warned Mr Slavin.

"We've rendered something illegible. And we've lost the sense of what's actually happening in this world we've made."

and then I stopped reading.
 
Reading the actual article, as I have only just done, I think Mr. Slavin, whoever he is, is the victim of selective quoting and the failure of text to convey tone. Judging by my knowledge of TED talks I suspect that in the flesh he was a lot more loose and humorous in speaking what sound, when set down in letters, like solemn predictions of doom.
 
No, it was just a particularly bad TED talk. Wishy-washy, lacking in specifics and general FUD about computer intelligence and algorithms. You can watch it for yourself, but be warned that you will gain nothing.
 