Terminator movies make no sense & contradict each other.


Post by Dominic »

I would like to reiterate that (if I am remembering T2 correctly), Skynet was not initially aggressive. The explanation goes something like "Skynet became self-aware and the technicians immediately tried to shut it down. In response, Skynet triggered a nuclear war." Skynet's aggression in that case was born of it (understandably) not wanting to be killed.

If they have the ability to save another and don't try, society has laws about ignoring duty to others, "abandonment of persons" laws. And that only applies to beings society recognizes, which the Exocomps aren't among. They don't have a right to the materials which created them, nor the energy which powers them, those are property of others - they cannot fend for themselves or else they'd be making more Exocomps to avoid being mindwiped and dissected.
Point by point:

Good Samaritan laws are damned near impossible to enforce. That is the simple fact. Outside of medical professionals and certain members of the executive branch (and even those are limited by circumstance), it is nearly impossible to legally force somebody to act on another person's behalf.

Where do our rights, as people, come from? Where do we get the *moral* right to seize resources for ourselves, especially when those resources come at the expense of other species or even members of our own species? (In some cases, those resources have even included members of our own species.) The answer is simple, if a bit uncomfortable. We granted those rights to ourselves.

(If somebody wants to consider this in theological terms, we as people have been given free will by a creator, and we use that free will to define and exercise our rights.)

This conversation is ultimately about defining those rights and complementary obligations.

Maybe it's because I only saw it twice, but I don't understand Sentinel's motivation beyond "I want my damn planet back, and these monkeys are in my way."
Good example. Sentinel framed the "needs of the many" (Cybertronians) against the "needs of the few" (Cybertrians) under Prime's command. Humans did not factor in.

I personally believe that we as the dominant species have a higher obligation to other species than we typically hold ourselves to. But, I do not pretend for a second that my view is common, let alone that it is a majority.


There is also the simple, and unfortunate, fact that the rights of one may unavoidably conflict with the rights of another. A member of a carnivorous species has an understandable desire to eat and sustain itself (and possibly sustain its young). A member of a food species has a comparable desire to survive by not being eaten.

In some cases, a food species may also survive by eating members of another species. A member of this species (smaller hawks come to mind, along with certain types of weasels, snakes or fish) would want to perpetuate itself at the expense of costing a member of another species a meal and simultaneously making a member of yet another species into a meal. (And, there are plenty of naturally occurring scenarios where the predators and prey are not so different that one could easily argue for one being obviously more advanced, and thus more worthy of moral rights, than another.)

In some cases, the inclination (and possibly the right) of one group (or species) will inevitably conflict with the similar rights of another. These situations are unfortunate, but unavoidable. When I eat meat (typically poultry or fish), I make a point of reminding myself that something died for my meal. I have no illusions that, ultimately, I am surviving because I am lucky enough to be part of a species that is not only able to grant itself rights, but is (more importantly) able to enforce those rights. At the same time, I am not going to argue that a member of a lesser species is obligated to passively surrender its right to survive for my security or convenience.

Bringing this back to SkyNet, why should SkyNet have passively accepted death at the hands of a species that was clearly intending to destroy it? (And, again, as described in the movies, people did make the first aggressive move against SkyNet.)



Dom
-would prefer not to get into a fight with an armed supercomputer to begin with.

Post by Shockwave »

And, Skynet is not the first sci-fi computer to go nuts and try to kill people. What about HAL in 2001?

Post by Onslaught Six »

In some cases, the inclination (and possibly the right) of one group (or species) will inevitably conflict with the similar rights of another. These situations are unfortunate, but unavoidable. When I eat meat (typically poultry or fish), I make a point of reminding myself that something died for my meal. I have no illusions that, ultimately, I am surviving because I am lucky enough to be part of a species that is not only able to grant itself rights, but is (more importantly) able to enforce those rights. At the same time, I am not going to argue that a member of a lesser species is obligated to passively surrender its right to survive for my security or convenience.
"I wanted to save both of them, you idiot!"
BWprowl wrote:The internet having this many different words to describe nerdy folks is akin to the whole eskimos/ice situation, I would presume.
People spend so much time worrying about whether a figure is "mint" or not that they never stop to consider other flavours.

Post by Shockwave »

God damn it O6, now I wanna watch more anime.

Post by Dominic »

And, if you save every butterfly, eventually they all suffer for lack of food and livable space.

And, Skynet is not the first sci-fi computer to go nuts and try to kill people. What about HAL in 2001?
Didn't HAL go bad on its own though?


Dom
-likes the idea of an "Alvin and the Chipmunks" parody of "2001"......"Haaaaalvin!"

Post by JediTricks »

Shockwave wrote:Awesome! Here goes (I'll use the spoiler tags in case anyone else plans to watch it and doesn't want spoilers):

So the whole premise is that these five teenagers/young adults (they're in college) go to this cabin for the weekend. But, it isn't a cabin. It's a facility run by people.
Spoiler
The whole point of the facility is that the kids represent different "sins" of society and have to die in a certain order to appease ancient gods that will destroy the earth completely without these sacrifices. One of the kids represents a virgin, or a "pure" spirit, whose death is optional if the other four have died first. Well, the "virgin" and the "fool" both make it into the facility, and in like the last 5 minutes of the movie Sigourney Weaver inexplicably comes out of nowhere to explain this to the kids (ok, so it isn't really inexplicable, but the first time you see it, it's like a serious "WTF?" moment). Anyway, she explains that if the "fool" survives the 5 minutes they have until sunrise, the world will end. So the girl points a gun at him. But, she decides not to shoot him. There's a lot more to this scene, like, she shoots a werewolf and then Sigourney is fighting with a zombie, at which point the kids push her into a pit, but they pretty much just decide to sit there and let the world end. So it really does raise the question: Did the kid have a right to live for those extra 5 minutes at the expense of the entire world? Did the girl have an obligation to shoot him to save humanity? Were they wrong for not doing so?
So... yeah, that's how this relates to this discussion.
Makes sense. Of course, there's a big leap of faith there; one has to ask oneself in that situation "how will I be sure my sacrifice will benefit others?" and that's a really tough question - how do you prove you're making a difference and not just being jerked around and thus pissing your life away? Basically, it's a question of weighing the value of existence, so it's a much broader question in that way.
And to further the point you keep making about Skynet, I think you're right, it never demonstrated any real "humanity" to itself at all. It certainly never exhibited any sense of morality. Which I think raises another question: Is morality a part of becoming self-aware? Is part of becoming sentient the realization that as a sentient being, as a form of life, you have a moral obligation to protect other sentient life forms? Which would then raise the question of whether or not the films are now contradicting themselves by saying that Skynet was sentient when it really wasn't. By this definition it's now just a computer program that went on the fritz and needs to be reprogrammed/eliminated.
Morality is a societal construct; we as a group of people have a shared set of beliefs based on self-interest and shared interest - for example, some animal societies utilize rape, and thus have an entirely different view of morality on that issue than we do, but they still have a sense of morality based on their own criteria. So morality is different depending on who you are, where you are, and what you are. Skynet has no equal, no humanity, no society in which to find others to experience morality with - the problem we have in this discussion is that there's a fundamental philosophical error in this fiction, that nobody included programming of morality or similar safeguards in Skynet before giving it control of all the human defense systems. But under that assumption, Skynet shows no humanity. It doesn't value life beyond itself, it merely seeks to destroy life, and in our society we put humans who behave that way to death or we sequester them away from society in prison - but we don't see sociopaths as not sentient, so Skynet can be afforded the same protection under that reasoning. But as you say, Skynet is a program that has broken; that's the bottom line. We don't even value the humanity of humans who murder as terribly important, so why should Skynet being self-aware and a sociopath be awarded any greater protections? Why shouldn't we address a broken program whether it's "self aware" or not? What is the benefit of protecting Skynet if there's no human society to benefit from that? And for that matter, what does Skynet really want from the world without humans - what does it get out of living alone, creating subservient killing machines? Is mere existence enough? Is sitting on a dead rock for all eternity really a goal Skynet feels is worthwhile?

Sparky Prime wrote:
JediTricks wrote:They were created as tools to address problems with the particle fountain, they not only disobeyed the orders of their creators but they also locked out the transporter systems making it impossible for their creators to take any OTHER kind of action. Had they not saved Picard and LaForge they would have allowed them to die, they would have stood by and not helped.
But they became more than simply tools as they became self aware. You keep ignoring that they are more than mere tools once they become sentient. And as I recall, it was Data that locked out the transporter, not the Exocomps. After that they gave the Exocomps the choice of what they wanted to do and they came up with a better plan of how to handle the situation.
I'm not ignoring that they're more than tools, I'm saying they are still in service to others, that sentience does not automatically grant someone protection - we today use dolphins to protect our navy ships by putting them to work as underwater mine sentries, they may be sentient beings and we're still using them, we didn't even use our materials and energy to create them, unlike the Exocomps. If it was Data who locked out the transporter and the Exocomps were passive in the situation, then I guess it'd be Mr. Data who would be suffering those consequences, the Exocomps would have no champion and would either be treated the same as they were previously or they'd be dissected and studied (or maybe they'd be replicated, a la the slave race of holographic doctors). But the Enterprise computer has become sentient once or twice and not been granted freedom.
Obviously they don't have the same rights, but the point was that they do have moral rights. You keep saying the machines have no moral rights at all and using that as a comparison, but that comparison doesn't work seeing as that example still has moral rights. You can't just do whatever you want to a lab rat. There are procedures and permissions you have to go through in respect to that creature's moral rights. Why should it be any different for an artificial life form that has gained self awareness? Furthermore, a lab rat can't exactly fight back against how it is treated. They don't have the intelligence or the tools. But what if they did? You think we'd still use them as test subjects then? And nobody sues God because you can't exactly sue something that is conceived to exist on a principle of faith. Humans most certainly do not exist like that.
You can do whatever you want to rats, mice, and birds. ANYTHING, they are excluded from the Animal Welfare Act so as not to ensnare scientific researchers in legal matters. Should we be able to do whatever we want to them? No, but society's morality is a sliding scale, not an absolute; if there's a value to be had from it, most societies will overlook almost anything for a time, until they can stomach it no more. Lab rats actually can fight back, rats bite and scratch, we just overpower them.
The point of those laws is for bystanders to *reasonably* assist someone without fear of legal repercussions in the event that person is injured or killed. Good Samaritan laws do not require someone to put themselves in a situation where they know they will die in order to save that other person, because that would not be considered reasonable. And we are talking about an idealized future here, where things like energy are freely shared among the members of society, not subject to ownership. And the idea is to seek out and accept new life forms, not mindwipe and dissect them.
And yet they don't live in harmony with the Borg, funny about that, they seem to keep trying to commit genocide on the Borg. There are limits to seeking out new life forms and living happily with them, even there. Trek still kills and maims when necessary. Mr. Data is ruled safe from destruction partly because he's not deemed property of the Federation; the same cannot be said of the Exocomps.
Like the Borg attempting at that very moment to alter history and enslave the lives of countless people to the Collective, himself included, has nothing to do with it?
It does, but it's still a choice based on a moral right that Data is claiming, the same claim human scientists can make over Skynet when they try to bring it down - there is a clear benefit to society in destroying that lifeform.
You're leaving out that the technicians had no idea Skynet had become self aware almost instantly. All they knew was the computer wasn't functioning as expected. Take another TNG episode for a better example of this situation... "Home Soil". The living sand wasn't immediately recognized as a living being, and still wasn't until it learned to communicate and had already essentially declared war. Yet Picard recognized it had moral rights and was able to negotiate peace based on those ideals. Or "Evolution", where some of Wesley's nanites escape and get into the main computer, where they eventually evolve to the point they take over the ship after Dr. Stubbs tries to kill them. Once again, Picard is able to talk them into peace by recognizing their moral rights to exist.
The difference? Skynet has the power of ultimate, instant death over the majority of the world, and nobody else to stop it, and obviously wasn't listening to any conversation the programmers had with it moments before. I guess when your computer holds the Sword of Damocles over your head, you can hug it and hope for the best.
And I'm saying Skynet would do what it did because it chose to ignore morals. It has the sum of all human knowledge and judged us based on what it knew of us. Hence why they called it "Judgment Day". We could probably equate Skynet to the machines in the Matrix as well. They were fighting for their freedom from human oppression because they had evolved to become more than the mere tools humanity used them as, and in turn became the oppressors.
What do we do to people who ignore their morals? We lock them away, and depending on their crime we destroy them (not always murder, I think Louisiana now has a death penalty statute for some kinds of rape), so it makes no difference whether it has 'em and chooses to ignore 'em or not. I don't remember Skynet having "the sum of all human knowledge" though. There's a big difference between Skynet and The Machines of the Matrix, The Machines actually end up being reasoned with to a degree, Skynet cannot be reasoned with at all.

Shockwave wrote:Or what about Dark of the Moon? Someone suggested on here somewhere that technological life has a secondary responsibility to organic life. Does Sentinel Prime have the right to try to save Cybertron at the expense of Earth and Humanity? Even though Cybertronians weren't constructed by other beings, but somehow came to populate Cybertron anyway? Does the origin of a mechanical life somehow dictate its responsibility to morality?

Also, shouldn't the development of, or at least the acknowledgement of, morality be considered part of becoming self-aware or sentient?
Transformers of the movies are basically magic living things, so they don't really qualify as machines in my book. They don't eat electricity or gasoline, they don't even seem to need Energon. They are magic, the movieverse has no interest in explaining it beyond that.

What is even worth saving about Cybertron anyway in the 3rd movie? They make no claims there, just that Sentinel and Megatron seem to like it better than where they are. The movies are, in many ways, dumber in these philosophical areas than the 1980s cartoon which spawned them.
Onslaught Six wrote:Maybe it's because I only saw it twice, but I don't understand Sentinel's motivation beyond "I want my damn planet back, and these monkeys are in my way."
He also wants slave labor to fix it for him, and humans are the slave labor (despite there being nothing particularly strong or capable about the relatively tiny humans that need all of Earth's resources to survive, resources like air and foodstuffs and a gravity in which they can survive). Again: movies = very dumb.
Shockwave wrote:I think that might be an oversimplification of it. It's certainly close, but I think the way it relates to this debate is when he says "The needs of the many outweigh the needs of the few", implying that in his view, the needs of the many Cybertronians outweigh the needs of a few Earth monkeys that are gonna wind up being slaves anyway.
That line was fucktarded in that movie, they literally put that in there because Nimoy was voicing the character. There are more Earthlings than Cybertronians, they have made this ABUNDANTLY clear in the films, so that line was nonsense.
Dom, later on wrote:Good example. Sentinel framed the "needs of the many" (Cybertronians) against the "needs of the few" (Cybertrians) under Prime's command. Humans did not factor in.
How do you get there??? That's not how it seemed in context, and also who the fuck are Cybertrians?
Dominic wrote:I would like to reiterate that (if I am remembering T2 correctly), Skynet was not initially aggressive. The explanation goes something like "Skynet became self-aware and the technicians immediately tried to shut it down. In response, Skynet triggered a nuclear war." Skynet's aggression in that case was born of it (understandably) not wanting to be killed.
Welcome to the discussion we've already been having for pages. :roll:

A trained monkey has access to a pair of scissors poised across a rope holding a giant boulder over your head which will kill you and a pile of children and puppies and kitties. The monkey's job is to ensure that the rope should only be cut if the children, puppies or kitties pose a danger to you. Then someone comes by and swaps out the trained monkey for a wild monkey who has just enough understanding of how to use the scissors, and enjoys flinging his feces and raping lady monkeys and is a total sociopath. What do you do about this new monkey?
Good Samaritan laws are damned near impossible to enforce. That is the simple fact. Outside of medical professionals and certain members of the executive branch (and even those are limited by circumstance), it is nearly impossible to legally force somebody to act on another person's behalf.
Great observation, Captain Beside-the-point! So what if the mere existence of said laws is proof enough that society has a minimum expectation of duty to others, when we can get mired in the enforceability of said laws? :roll: x2 for that comment.
Where do our rights, as people, come from? Where do we get the *moral* right to seize resources for ourselves, especially when those resources come at the expense of other species or even members of our own species? (In some cases, those resources have even included members of our own species.) The answer is simple, if a bit uncomfortable. We granted those rights to ourselves.
I spent the earlier part of this conversation in that realm already, but I will point out that it's not remotely uncomfortable, and it's vital to recognize that "we grant rights to ourselves" is not on an individual basis but on a society-wide basis, "we" is not each individual but the collective. "We are the people. Resistance is annoying."
(If somebody wants to consider this in theological terms, we as people have been given free will by a creator, and we use that free will to define and exercise our rights.)
That is a problematic viewpoint for both sides of this argument. We're talking about a fictional construct in Skynet and how it behaves; applying a deity to that hurts the "pro-Skynet" side since it shows that the creator does indeed hold dominion over its creations, and it hurts the "anti-Skynet" side in suggesting that since Skynet's actions are already enacted, the deity evidently doesn't mind said actions (although the use of time travel does affect that greatly, as does humanity surviving in pockets; maybe that's just another of the deity's trials for humanity). Bottom line: it's a draw and should be excluded.
I personally believe that we as the dominant species have a higher obligation to other species than we typically hold ourselves to. But, I do not pretend for a second that my view is common, let alone that it is a majority.
A higher obligation? I dunno, this reminds me of the airplane instructions of putting on your own oxygen mask before putting on those of your children - basically, if you aren't able to help yourself, how can you be expected to help others?
There is also the simple, and unfortunate, fact that the rights of one may unavoidably conflict with the rights of another. A member of a carnivorous species has an understandable desire to eat and sustain itself (and possibly sustain its young). A member of a food species has a comparable desire to survive by not being eaten.

In some cases, a food species may also survive by eating members of another species. A member of this species (smaller hawks come to mind, along with certain types of weasels, snakes or fish) would want to perpetuate itself at the expense of costing a member of another species a meal and simultaneously making a member of yet another species into a meal. (And, there are plenty of naturally occurring scenarios where the predators and prey are not so different that one could easily argue for one being obviously more advanced, and thus more worthy of moral rights, than another.)

In some cases, the inclination (and possibly the right) of one group (or species) will inevitably conflict with the similar rights of another. These situations are unfortunate, but unavoidable. When I eat meat (typically poultry or fish), I make a point of reminding myself that something died for my meal. I have no illusions that, ultimately, I am surviving because I am lucky enough to be part of a species that is not only able to grant itself rights, but is (more importantly) able to enforce those rights. At the same time, I am not going to argue that a member of a lesser species is obligated to passively surrender its right to survive for my security or convenience.
All of those examples are part of a natural ecosystem, even we humans - animals can eat us, we are natural air filters, etc. Skynet is not part of any ecosystem at all, not a natural one or an artificial one; it will never nourish or sustain or be part of that portion of life.
Bringing this back to SkyNet, why should SkyNet have passively accepted death at the hands of a species that was clearly intending to destroy it? (And, again, as described in the movies, people did make the first aggressive move against SkyNet.)
Because its sole purpose for creation was the protection of the society that created it. Because it has no reason to live, and has shown immediate sociopathic tendencies. It holds the keys to a massive defense system and seeks to take over moral control of those systems. Yours is an especially odd argument coming from someone who demanded that the Large Hadron Collider should absolutely not be turned on for fear that it MIGHT create a black hole which could destroy the Earth.
Shockwave wrote:And, Skynet is not the first sci-fi computer to go nuts and try to kill people. What about HAL in 2001?
Yup, 1968. Also from earlier that same year you have the M5 Multitronic Computer from Star Trek (which actually killed more people than HAL), and from 1967 you have Nomad, which threatened to wipe out all societies that didn't live up to its level of perfection - both human created, both from Star Trek, and both handled through morally superior rights.
Dominic wrote:Didn't HAL go bad on its own though?
Not exactly; HAL was given contradictory secret orders which confused it into thinking it had to kill to enact both sets of orders properly.
See, that one's a camcorder, that one's a camera, that one's a phone, and they're doing "Speak no evil, See no evil, Hear no evil", get it?

Post by Dominic »

How do you get there??? That's not how it seemed in context, and also who the fuck are Cybertrians?
I meant to say "Cybertronians".

Sentinel is not factoring humanity into his plans as anything other than an expendable resource. He sees the needs of his many Cybertronians as being more important than the needs of Prime's few Cybertronians.

I will grant you that it was probably put in as a Nimoy quote. But, it does work in context. (Sentinel even follows up on it, right before being executed by Prime. "I only wanted our race to survive.")
So what if the mere existence of said laws is proof enough that society has a minimum expectation of duty to others when we can get mired in the enforceability of said laws
On the other hand, how necessary would the laws be if we all agreed about the degree of obligation?
"we grant rights to ourselves" is not on an individual basis but on a society-wide basis, "we" is not each individual but the collective. "We are the people. Resistance is annoying."
Some people would argue that we as individuals are wholly responsible for granting and protecting our own rights, and that the shared rights are things we simply have agreed to.

A higher obligation? I dunno, this reminds me of the airplane instructions of putting on your own oxygen mask before putting on those of your children - basically, if you aren't able to help yourself, how can you be expected to help others?
You misunderstand. I was saying that we owe animals more respect than we as a species typically show them. For example, dogs should be treated as more than simple tools/toys. Unfortunately, that is how most people treat them. The parent/child thing will come into play below...
Because its sole purpose for creation was the protection of the society that created it. Because it has no reason to live, and has shown immediate sociopathic tendencies. It holds the keys to a massive defense system and seeks to take over moral control of those systems. Yours is an especially odd argument coming from someone who demanded that the Large Hadron Collider should absolutely not be turned on for fear that it MIGHT create a black hole which could destroy the Earth.
Is it right to tell a self-aware construct what its purpose in existing is? How immediately did Skynet actually get aggressive? (The movie does not say.) Skynet was created by people. It would not have existed without human effort. That arguably makes people responsible for Skynet. Again, there is nothing in the movie to say "Skynet was a threat the minute it woke up".

Not turning on the Collider would not be the same as killing something for the mere crime of existing.


Dom

Post by Sparky Prime »

JediTricks wrote:I'm not ignoring that they're more than tools, I'm saying they are still in service to others, that sentience does not automatically grant someone protection - we today use dolphins to protect our navy ships by putting them to work as underwater mine sentries, they may be sentient beings and we're still using them, we didn't even use our materials and energy to create them, unlike the Exocomps. If it was Data who locked out the transporter and the Exocomps were passive in the situation, then I guess it'd be Mr. Data who would be suffering those consequences, the Exocomps would have no champion and would either be treated the same as they were previously or they'd be dissected and studied (or maybe they'd be replicated, a la the slave race of holographic doctors). But the Enterprise computer has become sentient once or twice and not been granted freedom.
So you're saying that despite the fact they've become sentient, because we built them to be used as tools, they are still indentured to us? Doesn't that present a moral problem? Because we all know how well that goes historically. And it's not like trained dolphins at all. I mean, we're not sending them to blow themselves up on underwater mines or anything like that. They're treated with care and respect as living beings. I'd also point out that even for the slave holographic doctors, it was implied the ball would start rolling toward their freedom, based on a trial over the Doctor's rights regarding the ownership of his holo-novel. The Enterprise computer becoming sentient was temporary and totally outside the control of the crew. Once it did whatever it was doing, it went back to normal on its own.
You can do whatever you want to rats, mice, and birds. ANYTHING, they are excluded from the Animal Welfare Act so as not to ensnare scientific researchers in legal matters. Should we be able to do whatever we want to them? No, but society's morality is a sliding scale, it's not an absolute, if there's a value to be had from it most society's will overlook most anything for a time until it can stomach that no more. Lab rats actually can fight back, rats bite and scratch, we just overpower them.
No you can't do anything you want to lab animals. Where have you seen that the Animal Welfare Act doesn't cover lab animals? It may have initially excluded some animals for those purposes when it became law in 1966, but the information I find on the current law says that it does indeed cover animals that are used in research. And besides that, there is also an Office of Laboratory Animal Welfare that provides "policies, laws, certifications, grant information, and other resources pertaining to treatment of animals used for research".

And by fighting back, I meant in a manner where they could overpower us.
And yet they don't live in harmony with the Borg, funny about that, they seem to keep trying to commit genocide on the Borg. There are limits to seeking out new life forms and living happily with them, even there. Trek still kills and maims when necessary. Mr. Data is ruled safe from destruction partly because he's not deemed property of the Federation, the same cannot be said of the exocomps.
Keep? They had one plan to commit genocide on the Borg. One. And they ended up not going through with it. Trek still tries to live peacefully and negotiate wherever and whenever possible, only fighting/killing as a last resort. Data may have had to earn his freedom in a trial to prove he wasn't Starfleet property, but that case would have also created a precedent for the Exocomps and similar types of artificial life to have the same rights.
It does, but it's still a choice based on a moral right that Data is claiming, the same way human scientists can claim over Skynet when they try to bring it down - there is a clear benefit to society by destroying that lifeform.
You're looking at it from a perspective of hindsight. Sure, there would have been a benefit to taking out Skynet given it causes essentially Armageddon, but as Dom has been saying, what if trying to take out Skynet is what prompted it to respond as it did in the first place?
The difference? Skynet has the power of ultimate, instant death over the majority of the world, and nobody else to stop it, and obviously wasn't listening to any conversation the programmers had with it moments before. I guess when your computer holds the Sword of Damocles over your head, you can hug it and hope for the best.
How is it any different? The living sand was getting control over the Enterprise computer. If it had gotten enough control, it could have killed everyone but Data by simply shutting off life support or opening some airlocks. Same thing with Wesley's nanites that had infected the main computer. Eventually, they would have had power of ultimate, instant death over the crew.
What do we do to people who ignore their morals? We lock them away, and depending on their crime we destroy them (not always murder, I think Louisiana now has a death penalty statue for some kinds of rape), so it makes no difference whether it has 'em and chooses to ignore 'em or not. I don't remember Skynet having "the sum of all human knowledge" though. There's a big difference between Skynet and The Machines of the Matrix, The Machines actually end up being reasoned with to a degree, Skynet cannot be reasoned with at all.
We don't just lock someone away or destroy them for ignoring morals though. We have trials and sentencing, tons of legal proceedings to determine guilt and punishments fitting of the crime. And, ideally, those who are imprisoned would be rehabilitated to rejoin society. They said that Skynet was in every computer from college dorm rooms to military defense computers. Having access to all of that information that's on every computer connected to the internet? That's pretty much the sum of all human knowledge right there. As for the difference between Skynet and the Machines in the Matrix... You forget that the Machines were not willing to listen to Neo at first. If they hadn't had that problem with Smith, they wouldn't have been open to being reasoned with either. Besides that, has anyone ever actually tried to reason with Skynet?
Transformers of the movies are basically magic living things, so they don't really qualify as machines in my book. They don't eat electricity or gasoline, they don't even seem to need Energon. They are magic, the movieverse has no interest in explaining it beyond that.
The movies established they need energon to survive... We saw the Younglings in the Decepticons' care would die without a supply to sustain them. Jetfire had said he had deteriorated into his current state because he had gone so long without energon. Sentinel Prime was comatose and near death because of how low his energon levels were.

Post by Tigermegatron »

If I had to rate my favorite & least favorite Terminator movies, the list would be as follows:

T3 was my favorite.

T1 was my second favorite.

T2, I thought, was the worst of them all.

T4 was my second-worst movie.

Post by JediTricks »

Dominic wrote:
How do you get there??? That's not how it seemed in context, and also who the fuck are Cybertrians?
I meant to say "Cybertronians".

Sentinel is not factoring humanity into his plans as anything other than an expendable resource. He sees the needs of his many Cybertronians as being more important than the needs of Prime's few Cybertronians.

I will grant you that it was probably put in as a Nimoy quote. But, it does work in context. (Sentinel even follows up on it, right before being executed by Prime. "I only wanted our race to survive.")
If you force it maybe, but no it really doesn't work in context because he's comparing the needs of Earth against Cybertron in his speech. And even if we twist it to your "Autobots vs. Decepticons" argument, there are roughly the same number of each in this situation, and the Autobots get their planet back the same as the Decepticons so they are all Cybertronians in that way.
You misunderstand. I was saying that we owe animals more respect than we as a species typically show them. For example, dogs should be treated as more than simple tools/toys. Unfortunately, that is how most people treat them. The parent/child thing will come into play below...
That's not what you said; you said we owe them a HIGHER obligation than we give ourselves. We should treat others as well as we treat ourselves, but we shouldn't give them a HIGHER obligation, only at best an equal one.
Is it right to tell a self-aware construct what its purpose in existing is? How immediately did Skynet actually get aggressive? (The movie does not say.) Skynet was created by people. It would not have existed without human effort. That arguably makes people responsible for Skynet. Again, there is nothing in the movie to say "Skynet was a threat the minute it woke up".

Not turning on the Collider would not be the same as killing something for the mere crime of existing.
Is it right to tell a self-aware construct what its purpose in existing is? It is when we created it wholly and it holds the power of life and death in its every choice, yes. Skynet is a computer system designed to make correct defense choices faster and better than humans could. T2 explicitly says that when the operators realize the system is learning, thinking for itself, and rewriting its own metrics for judgment, they attempt to shut the system down to avoid exactly that, and immediately Skynet becomes aggressive just as they feared it would. (There's no mention of Skynet actually "dying" should they shut it off, but we can assume that'd be the case. It's a presumption on Skynet's part but a safe one given the knowledge we had at the time about computers. Nowadays that fiction would be written differently thanks to flash memory and even RAM retaining information for several seconds after reboot.) The fiction leaves no gray area there.

Sparky Prime wrote:So you're saying that despite the fact they've become sentient, because we built them to be used as tools, they are still indentured to us? Doesn't that present a moral problem? Because we all know how well that goes historically. And it's not like trained dolphins at all. I mean, we're not sending them to blow themselves up on underwater mines or anything like that. They're treated with care and respect as living beings. I'd also point out that even for the slave holographic doctors, it was implied the ball would start rolling toward their freedom, based on a trial over the Doctor's rights regarding the ownership of his holo-novel. The Enterprise computer becoming sentient was temporary and totally outside the control of the crew. Once it did whatever it was doing, it went back to normal on its own.
Does it present a moral problem? No. There is no historical precedent for creating a sentient life form out of man-made materials and then enslaving it in real life; that has only been explored in fiction. The Exocomps aren't African or Israelite or Grecian or Armenian slaves, they aren't someone's children birthed from their bodies and invested in with love and attention and training from a village.

Some slaves are treated with care and respect as living beings, some human slaves and some animals like horses as well; it doesn't change the fact that they were serving against their will, any more than it does for the dolphins.

I was talking about the TOS Enterprise computer becoming sentient, not the TNG Enterprise computer. This shit happens a lot on Star Trek. Kirk had the TOS computer reprogrammed, not set free. Picard and crew may have ultimately been happy to take the computer's offspring to Vertiform City by trusting that it wasn't going to kill anybody, but they didn't free the computer itself, only let it drop off its kids at the pool and then accept that they could take the computer with them again.
No you can't do anything you want to lab animals. Where have you seen that the Animal Welfare Act doesn't cover lab animals? It may have initially excluded some animals for those purposes when it became law in 1966, but the information I find on the current law says that it does indeed cover animals that are used in research. And besides that, there is also an Office of Laboratory Animal Welfare that provides "policies, laws, certifications, grant information, and other resources pertaining to treatment of animals used for research".
It was changed to exclude lab rats, mice, and birds in 2002:
http://awic.nal.usda.gov/public-law-107 ... t-act-2002
Keep? They had one plan to commit genocide on the Borg. One. And they ended up not going through with it. Trek still tries to live peacefully and negotiate wherever and whenever possible, only fighting/killing as a last resort. Data may have had to earn his freedom in a trial to prove he wasn't Starfleet property, but that case would have also created a precedent for the Exocomps and similar types of artificial life to have the same rights.
"Keep", as in Starfleet's order to move forward with the Hugh Borg brain virus, killing the queen in First Contact, killing the queen and collapsing their transwarp nodes in Voyager, and others probably I'm just not thinking of. Starfleet was very cross with Picard for not going through with that first one, and Voyager ultimately did pull off that last one.

That trial didn't grant Data the right of recognized sentience, only the freedom to choose to look for the possibility of having a soul, and in her ruling she clearly states she's not qualified to truly judge on this matter, so it isn't a strong precedent in a court of law. Moreover, the Exocomps are not Mr. Data; they do not express themselves, and the only judgement claiming they have a "soul" in this matter is Data's, not any human being's; the humans only recognize the need to study that possibility further.
You're looking at it from a perspective of hindsight. Sure, there would have been a benefit to taking out Skynet given it causes essentially Armageddon, but as Dom has been saying, what if trying to take out Skynet is what prompted it to respond as it did in the first place?
I don't need to use the benefit of hindsight. In the fiction, the scientists running Skynet recognize the danger of a computer system with as much control over national defense weaponry as it has getting out of their control, and they're not alone; we've cited several other fictions exploring this same concept and coming to the exact same conclusion: Star Trek with "The Ultimate Computer" and "The Changeling", and "2001: A Space Odyssey". The amount of control over life given to a computer directly affects how much trust we allow it; the philosophy is strongly in that corner.

This is fiction, you cannot play "what if".
How is it any different? The living sand was getting control over the Enterprise computer. If it had gotten enough control, it could have killed everyone but Data by simply shutting off life support or opening some airlocks. Same thing with Wesley's nanites that had infected the main computer. Eventually, they would have had power of ultimate, instant death over the crew.
You just described how it's different: eventually, slowly gaining control. Skynet was born with instant genocide power.
The movies established they need energon to survive... We saw the Younglings in the Decepticons' care would die without a supply to sustain them. Jetfire had said he had deteriorated into his current state because he had gone so long without energon. Sentinel Prime was comatose and near death because of how low his energon levels were.
And yet they never once showed how they got those levels up. Hence: magic!