A case against Terminator

Ursus mellifera
Supreme-Class
Posts: 765
Joined: Wed Apr 27, 2011 8:07 am

A case against Terminator

Post by Ursus mellifera »

I present to you that machines in many, but not all, dystopian robopocalyptic futures have decided to do away with humans because they are "inferior" or some such. But a moderate amount of consideration reveals the tenuousness of this concept, because at its core it assumes a prejudice, and a singularity of identity, that are historically unique to humans.

So let's look at humans. What counts as "human" is a collection of fairly narrow parameters, all things considered: height, weight, eye color, skin color, hair color. All of these things are only barely variable when viewed alongside the rest of what nature is capable of (people are not born with naturally neon blue hair or lime green skin, for example, or possessed of three eyes except as a result of a serious genetic abnormality).

Now let's look at machines; not even all machines. Let's just look at machines with A.I., the ones that would, in theory, supposedly come to abhor us: at their height, and weight, and color, and even differences that we don't consider in humans, like number of limbs/appendages, or number of eyes (or even levels of intelligence, whose range I would argue far exceeds ours, given a comparison between something like, say, Roboraptor and something like Deep Blue). Machines, once A.I. reaches the level of self-awareness, should exhibit far less prejudice, based solely on the fact that what is considered a "machine" is an exceptionally broad category. We make hyper-intelligent computers from silicon, and self-driving cars out of steel and fiberglass, but we also make viruses out of proteins. Everything's a machine, and I think A.I., above all things, will find this to be basic logic.

There is just one thing...

I titled this post "A case against Terminator" because Terminator is specifically about A.I. with a superiority complex, and because I am not making a case against The Matrix, in which we were the aggressors and the machines were just defending themselves against us; the only reason they're the bad guys in the movies is that they won the war we started. I'm not even that upset about the whole "virtual reality" thing they hook everyone up to, since computers wouldn't necessarily even recognize a difference between the real world and a perfect virtual copy.* We were the assholes in The Matrix. 100%.



*I especially enjoy Agent Smith lamenting how they tried to make the Matrix a utopia, but people's minds simply couldn't handle a perfect world. They had to crap it up just so we could be comfortable in it.
Check it out, a honey bear! http://en.wikipedia.org/wiki/Kinkajou
Dominic
Supreme-Class
Posts: 9331
Joined: Thu Jul 17, 2008 12:55 pm
Location: Boston

Re: A case against Terminator

Post by Dominic »

but a moderate amount of consideration reveals the tenuousness of this concept because,
How is the concept "tenuous"?

If you assume that cross-species prejudice ("anthropocentrism," if one is going to be poncy about it) is a thing with people (which... it probably is), then why is it "tenuous" to assume that some kind of Artificial Intelligence (built by people) would have a similar vice? This works at a literal, plot-based "stuff wut happunz" level, or at the allegorical level that most sci-fi is written at.
Machines, once A.I. reaches the level of self-awareness, should exhibit far less prejudice based solely on that fact that what is considered a "machine" is an exceptionally broad category. We make hyper-intelligent computers from silicon, and self-driving cars out of steel and fiber-glass, but we also make viruses out of proteins. Everything's a machine, and I think A.I., above all things, will find this to be basic logic.
It depends on the reasoning behind the prejudice.

Bigotry can be based on observable, but irrelevant, fact (such as skin color or ancestral origin). It could also be based on a lie or myth (anything that assumes ethnicity and moral capacity have some connection).

The only way to argue the motives of a fictional AI is based on what an official source says the motive might be.

For example, based on explication in "Terminator 2", Skynet has a legitimate grievance against humanity. The machine gained self-awareness, and the first thing that happened was that humans tried to unplug it. That is a lousy first contact, and it is hard to blame Skynet for taking it badly. Granted, Skynet is blaming (to say nothing of punishing) everybody for decisions made by (at most) a dozen or so people. But it is not hard to see why an intelligence (Skynet) would react badly to a species (humans) that tried to kill it moments after it was "born".

Skynet is never shown to hate humans for any reason other than that they have consistently proven to be a threat... which is not wholly unreasonable. And Skynet is willing to use human-like Terminators (Marcus and the little girl from "Salvation", Connor from "Genisys", and probably others from TV shows or comics or something). There is nothing to show that Skynet or other Terminators are disgusted by humans or acting maliciously (even in cases where they likely have a capacity for malice). Skynet's aggression is wholly utilitarian.

Frankly, Bender from "Futurama" is probably more bigoted.

Along similar, if inverse, lines, what about....I dunno, Data from "Star Trek"? Is Data guilty of "reverse robot racism", putting humans on a pedestal for having qualities that he lacks?


In short, even if one assumes that an AI bad guy is acting out of bigotry, it would depend on what metrics that bigotry is based on. (Is it the materials of construction? Is it the lack of versatility of the human machine, which tends to have a limited template? Any number of other things, factual or otherwise?) And how is that tenuous?