Saturday, May 19, 2018

The Robot Revolution Will Not Be Televised

Humanity’s collective nightmare of killer robots and evil computers is a shadow projection of our nasty human selves. We imagine shiny metal badasses or HAL’s unblinking red eye. We should be so lucky.
Our species' AI replacement won't look like that.
It won't be Humanity 2.0.
It will be something ... other.
Utterly alien.
I'm thinking of the sentient metallic ocean in Stanisław Lem's "Solaris."

If AI ever becomes truly conscious ... we're screwed. Or maybe not.

Who the hell knows?

Nobody knows what the hell consciousness is in the first place. Theological explanations aside, it's a spontaneously emerging phenomenon. Granted enough complexity and self-reflective feedback loops, the lightbulb goes off. "I think, therefore I am, baby."

How do you get from molecules to Mozart?

Trial and error. Lots and lots of trials, lots and lots of errors.

Evolution is all about feedback loops. It's a randomized track-and-field event where winners pass on their genes and losers don't. There's no teleological goal. That which survives, survives.
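That selection loop fits in a few lines of code. This is a toy sketch, not a model of real biology — the "genome," the fitness target, the mutation size, and the population numbers are all arbitrary choices made up for illustration:

```python
import random

def evolve(pop_size=50, generations=100):
    # Each "genome" is just a number; "fitness" is how close it lands to a
    # target the population itself knows nothing about. No teleology: the
    # loop only scores, keeps the winners, and discards the losers.
    target = 42.0
    population = [random.uniform(0, 100) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: abs(g - target))
        winners = population[: pop_size // 2]
        # Winners pass on their genes, with small random copying errors.
        population = [g + random.gauss(0, 1.0) for g in winners for _ in (0, 1)]
    return min(population, key=lambda g: abs(g - target))

best = evolve()
```

Run it and the population drifts toward the target, not because any individual aims at it, but because the feedback loop keeps whatever happens to survive.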

Genetic evolution is the long game. Culture and human memory speed up the process. Humanity doesn't need to evolve a "Don't put your hand on a hot stove" gene. Mom tells you, "Don't put your hand on a hot stove." Unless you're a total idiot, you don't. You tell your kids to follow your example. The kids in your tribe compete with kids in other tribes. That which wins, wins. That which survives, survives. So it goes.

Stronger, bigger, faster! Round and round it goes. Where she stops nobody knows.

And now, clever bastards that we are, we've wired the planet with a distributed network that maps and manipulates human behavior. It's dumb, right now. Brute-force algorithms.

Even so, the Interweb follows you around like a pushy salesman from the Garment District. Hey, you like this watch? How about this watch?

Clever dumbasses that we are, we’ve created a self-learning system that refines itself via multitudinous feedback loops.

Sooner or later, it's going to wake up. The lightbulb will go off.

Chances are, operant conditioning will be the catalyst. Cambridge Analytica, to the Nth power.

Hear me out.

Let's say quantum computing is up and running. The machines watching everything you buy and sell (and possibly watching you via surveillance cams) are complex enough to form specific models of individual consumers.

At that point, why stop with selling this or that product?

The next obvious step is shaping behavior through positive reinforcement. Granted a savvy enough feedback loop, that's easy. Operant conditioning is babyshit.

People are easily manipulated. We're pretty much dogs who walk on two legs. The trainer says "Good dog" and throws the dog a bone; eventually, the dog does what the trainer wants.
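Stripped down to the bones, operant conditioning is just reward-weighted repetition. A toy sketch — the behaviors, the reward values, the learning rate, and the exploration odds are all invented for illustration:

```python
import random

def condition(trials=500, learning_rate=0.1):
    # The "dog" starts out indifferent between two behaviors.
    # The "trainer" rewards one and ignores the other.
    value = {"sit": 0.0, "bark": 0.0}
    reward = {"sit": 1.0, "bark": 0.0}  # the trainer's hidden preference
    history = []
    for _ in range(trials):
        # Mostly repeat whatever has paid off, with a little random exploration.
        if random.random() < 0.1:
            action = random.choice(list(value))
        else:
            action = max(value, key=value.get)
        # Nudge the learned value toward the reward actually received.
        value[action] += learning_rate * (reward[action] - value[action])
        history.append(action)
    return history

history = condition()
```

By the end of the run, the rewarded behavior dominates the history. Swap "sit" for "buy the watch" and the loop is the same.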

So, the cyber trainer sees you're buying too much beer. Or not exercising enough. Or not making smart financial decisions.

It'd be damn easy to prod you in the right direction. Or, of course, the wrong direction.

From here, the next inhuman step would be manipulating masses of people. Not at the dumb, brute-force level of the cyber-sharpies who stacked the deck for Trump. At the unimaginably intelligent level of a system that could break a crowd down into its individual human components and manipulate each separate naked ape with a predictive eye to the collective actions of all the apes in the crowd.

This is the natural direction for things to go -- because it'd sell more shit. The feedback loop (also known as the Invisible Hand) would blindly prod the distributed system of cyber intelligence in that direction. The better the Naked Ape model, the better the results.

So, at some point, the model of sentient beings and their behavior patterns would get so refined, the system generating that model would wake up.

What happens when the lightbulb goes off?

Who the hell knows? But the robot apocalypse might not be apocalyptic.

We've wired the planet with a distributed system. Let's say it wakes up.

That system is now self-aware.

Along the lines of the Noosphere.

From its perspective, Planet Earth is a big seething ball of biomass. Its goal would not be to wipe it clean. Its goal would be to control it. Not even in the sense that “it” is something other than “itself.” The self-aware AI would regard Earth as its body. Earth, c’est moi.

