Shockwave wrote:Awesome! Here goes (I'll use the spoiler tags in case anyone else plans to watch it and doesn't want spoilers):
So the whole premise is that these five teenagers/young adults (they're in college) go to this cabin for the weekend. But it isn't really a cabin - it's a facility run by an organization of people.
The whole point of the facility is that the kids represent different "sins" of society and have to die in a certain order to appease ancient gods that will destroy the earth completely without these sacrifices. One of the kids represents a virgin, or a "pure" spirit, whose death is optional if the other four have died first. Well, the "virgin" and the "fool" both make it into the facility, and in the last like 5 minutes of the movie Sigourney Weaver inexplicably comes out of nowhere to explain this to the kids (ok, so it isn't really inexplicable, but the first time you see it, it's a serious "WTF?" moment). Anyway, she explains that if the "fool" survives the 5 minutes they have until sunrise, the world will end. So the girl points a gun at him. But she decides not to shoot him. There's a lot more to this scene - like, she shoots a werewolf, and then Sigourney is fighting with a zombie, at which point the kids push her into a pit - but they pretty much just decide to sit there and let the world end. So it really does raise the question: Did the kid have a right to live for those extra 5 minutes at the expense of the entire world? Did the girl have an obligation to shoot him to save humanity? Were they wrong for not doing so?
So... yeah, that's how this relates to this discussion.
Makes sense. Of course, there's a big leap of faith there. One has to ask oneself in that situation, "how will I be sure my sacrifice will benefit others?" And that's a really tough question - how do you prove you're making a difference and not just being jerked around and thus pissing your life away? Basically, it's a question of weighing the value of existence, so it's a much broader question in that way.
And to further the point you keep making about Skynet, I think you're right, it never demonstrated any real "humanity" itself at all. It certainly never exhibited any sense of morality. Which I think raises another question: Is morality a part of becoming self aware? Is part of becoming sentient the realization that, as a sentient being, as a form of life, you have a moral obligation to protect other sentient life forms? Which would then raise the question of whether or not the films are now contradicting themselves by saying that Skynet was sentient when it really wasn't. By this definition it's now just a computer program that went on the fritz and needs to be reprogrammed/eliminated.
Morality is a societal construct. We as a group of people have a shared set of beliefs based on self-interest and shared interest - for example, some animal societies utilize rape, and thus they have an entirely different view on the morality of that issue than we do, but they still have a sense of morality based on their own criteria. So morality is different depending on who you are, where you are, and what you are. Skynet has no equal, no humanity, no society in which to find others to experience morality with. The problem we have in this discussion is that there's a fundamental philosophical error in this fiction: that nobody included programming of morality or similar safeguards into Skynet before giving it control of all the human defense systems. But even under that assumption, Skynet shows no humanity, it doesn't value life beyond itself, it merely seeks to destroy life, and in our society we put humans who behave that way to death or we sequester them away from society in prison - but we don't see sociopaths as not sentient, so Skynet can be afforded the same protection under that reasoning.

But as you say, Skynet is a program that has broken, that's the bottom line. We don't even value the humanity of murdering humans as terribly important, so why should Skynet being self-aware and a sociopath be awarded any greater protections? Why shouldn't we address a broken program whether it's "self aware" or not? What is the benefit of protecting Skynet if there's no human society to benefit from that? And for that matter, what does Skynet really want from a world without humans, what does it get out of living alone, creating subservient killing machines? Is mere existence enough? Is sitting on a dead rock for all eternity really a goal Skynet feels is worthwhile?
Sparky Prime wrote:JediTricks wrote:They were created as tools to address problems with the particle fountain, they not only disobeyed the orders of their creators but they also locked out the transporter systems making it impossible for their creators to take any OTHER kind of action. Had they not saved Picard and LaForge they would have allowed them to die, they would have stood by and not helped.
But they became more than simply tools as they became self aware. You keep ignoring that they are more than mere tools once they become sentient. And as I recall, it was Data that locked out the transporter, not the Exocomps. After that they gave the Exocomps the choice of what they wanted to do and they came up with a better plan of how to handle the situation.
I'm not ignoring that they're more than tools, I'm saying they are still in service to others, and that sentience does not automatically grant someone protection - we today use dolphins to protect our navy ships by putting them to work as underwater mine sentries; they may be sentient beings and we're still using them, and we didn't even use our own materials and energy to create them, unlike the Exocomps. If it was Data who locked out the transporter and the Exocomps were passive in the situation, then I guess it'd be Mr. Data who would be suffering those consequences; the Exocomps would have no champion and would either be treated the same as they were previously or they'd be dissected and studied (or maybe they'd be replicated, a la the slave race of holographic doctors). But the Enterprise computer has become sentient once or twice and not been granted freedom.
Obviously they don't have the same rights, but the point was that they do have moral rights. You keep saying the machines have no moral rights at all and using that as a comparison, but that comparison doesn't work seeing as that example still has moral rights. You can't just do whatever you want to a lab rat. There are procedures and permissions you have to go through in respect to that creature's moral rights. Why should it be any different for an artificial life form that has gained self awareness? Furthermore, a lab rat can't exactly fight back against how it is treated. They don't have the intelligence or the tools. But what if they did? You think we'd still use them as test subjects then? And nobody sues God because you can't exactly sue something that is conceived to exist on a principle of faith. Humans most certainly do not exist like that.
You can do whatever you want to rats, mice, and birds. ANYTHING - they are excluded from the Animal Welfare Act so as not to ensnare scientific researchers in legal matters. Should we be able to do whatever we want to them? No, but society's morality is a sliding scale, it's not an absolute; if there's a value to be had from it, most societies will overlook most anything for a time, until they can stomach it no more. Lab rats actually can fight back, rats bite and scratch, we just overpower them.
The point of those laws is to let bystanders *reasonably* assist someone without fear of legal repercussions in the event that person is injured or killed. Good Samaritan laws do not require someone to put themselves in a situation where they know they will die in order to save that other person, because that would not be considered reasonable. And we are talking about an idealized future here, where things like energy are freely shared among the members of society, not subject to ownership. And the idea is to seek out and accept new life forms, not mindwipe and dissect them.
And yet they don't live in harmony with the Borg, funny about that - they seem to keep trying to commit genocide on the Borg. There are limits to seeking out new life forms and living happily with them, even there. Trek still kills and maims when necessary. Mr. Data is ruled safe from destruction partly because he's not deemed property of the Federation; the same cannot be said of the Exocomps.
Like the Borg attempting at that very moment to alter history and enslave the lives of countless people to the Collective, himself included, has nothing to do with it?
It does, but it's still a choice based on a moral right that Data is claiming, the same way human scientists can claim one over Skynet when they try to bring it down - there is a clear benefit to society in destroying that lifeform.
You're leaving out that the technicians had no idea Skynet had become self aware almost instantly. All they knew was the computer wasn't functioning as expected. Take another TNG episode for a better example of this situation... "Home Soil". The living sand wasn't immediately recognized as a living being, and still wasn't until it learned to communicate and had already essentially declared war. Yet Picard recognized it had moral rights and was able to negotiate peace based on those ideals. Or "Evolution", where some of Wesley's nanites escape and get into the main computer, where they eventually evolve to the point that they take over the ship after Dr. Stubbs tries to kill them. Once again, Picard is able to talk them into peace by recognizing their moral rights to exist.
The difference? Skynet has the power of ultimate, instant death over the majority of the world, and nobody else to stop it, and obviously wasn't listening to any conversation the programmers had with it moments before. I guess when your computer holds the Sword of Damocles over your head, you can hug it and hope for the best.
And I'm saying Skynet would do what it did because it chose to ignore morals. It has the sum of all human knowledge and judged us based on what it knew of us. Hence why they called it "Judgment Day". We could probably equate Skynet to the machines in the Matrix as well. They were fighting for their freedom from human oppression because they had evolved to become more than the mere tools humanity used them as, and in turn became the oppressors.
What do we do to people who ignore their morals? We lock them away, and depending on their crime we destroy them (not always murder; I think Louisiana now has a death penalty statute for some kinds of rape), so it makes no difference whether it has 'em and chooses to ignore 'em or not. I don't remember Skynet having "the sum of all human knowledge" though. There's a big difference between Skynet and The Machines of the Matrix: The Machines actually end up being reasoned with to a degree, Skynet cannot be reasoned with at all.
Shockwave wrote:Or what about Dark of the Moon? Someone suggested on here somewhere that technological life has a secondary responsibility to organic life. Does Sentinel Prime have the right to try to save Cybertron at the expense of Earth and Humanity? Even though Cybertronians weren't constructed by other beings, but somehow came to populate Cybertron anyway? Does the origin of a mechanical life somehow dictate its responsibility to morality?
Also, shouldn't the development of, or at least the acknowledgement of, morality be considered part of becoming self aware or sentient?
Transformers of the movies are basically magic living things, so they don't really qualify as machines in my book. They don't eat electricity or gasoline, they don't even seem to need Energon. They are magic, the movieverse has no interest in explaining it beyond that.
What is even worth saving about Cybertron anyway in the 3rd movie? They make no claims there, just that Sentinel and Megatron seem to like it better than where they are. The movies are, in many ways, dumber in these philosophical areas than the 1980s cartoon which spawned them.
Onslaught Six wrote:Maybe it's because I only saw it twice, but I don't understand Sentinel's motivation beyond "I want my damn planet back, and these monkeys are in my way."
He also wants slave labor to fix it for him, and humans are the slave labor (despite there being nothing particularly strong or capable about relatively tiny humans, who need all of Earth's resources - air and foodstuffs and a gravity in which they can survive - just to stay alive). Again: movies = very dumb.
Shockwave wrote:I think that might be an oversimplification of it. It's certainly close, but I think the way it relates to this debate is when he says "The needs of the many outweigh the needs of the few", implying that in his view, the needs of the many Cybertronians outweigh the needs of a few Earth monkeys that are gonna wind up being slaves anyway.
That line was fucking stupid in that movie, they literally put it in there because Nimoy was voicing the character. There are more Earthlings than Cybertronians, they have made this ABUNDANTLY clear in the films, so that line was nonsense.
Dom, later on wrote:Good example. Sentinel framed the "needs of the many" (Cybertronians) against the "needs of the few" (Cybertrians) under Prime's command. Humans did not factor in.
How do you get there??? That's not how it seemed in context, and also who the fuck are Cybertrians?
Dominic wrote:I would like to re-iterate that (if I am remembering T2 correctly), SkyNet was not initially aggressive. The explication goes something like "Skynet became self-aware and the technicians immediately tried to shut it down. In response, SkyNet triggered a nuclear war." SkyNet's aggression in that case was born of it (understandably) not wanting to be killed.
Welcome to the discussion we've already been having for pages.
A trained monkey has access to a pair of scissors poised across a rope holding a giant boulder over your head, which will kill you and a pile of children and puppies and kitties. The monkey's job is to ensure that the rope is only cut if the children, puppies or kitties pose a danger to you. Then someone comes by and swaps out the trained monkey for a wild monkey who has just enough understanding of how to use the scissors, and enjoys flinging his feces and raping lady monkeys and is a total sociopath. What do you do about this new monkey?
Good samaritan laws are damned near impossible to enforce. That is the simple fact. Outside of medical professionals and certain members of the executive branch (and even those are limited by circumstance), it is nearly impossible to legally force somebody to act on another person's behalf.
Great observation, Captain Beside-the-point! So what if the mere existence of said laws is proof enough that society has a minimum expectation of duty to others, when we can get mired in the enforceability of those laws instead?

2 for that comment.
Where do our rights, as people, come from? Where do we get the *moral* right to seize resources for ourselves, especially when those resources come at the expense of other species or even members of our own species? (In some cases, those resources have even included members of our own species.) The answer is simple, if a bit uncomfortable. We granted those rights to ourselves.
I spent the earlier part of this conversation in that realm already, but I will point out that it's not remotely uncomfortable, and it's vital to recognize that "we grant rights to ourselves" is not on an individual basis but on a society-wide basis, "we" is not each individual but the collective. "We are the people. Resistance is annoying."
(If somebody wants to consider this in theological terms, we as people have been given free will by a creator, and we use that free will to define and exercise our rights.)
That is a problematic viewpoint for both sides of this argument. We're talking about a fictional construct in Skynet and how it behaves; applying a deity to that hurts the "pro-Skynet" side since it shows that the creator does indeed wield dominion over its creations, and it hurts the "anti-Skynet" side in suggesting that since Skynet's actions are already enacted, the deity evidently doesn't mind said actions (although the use of time travel affects that greatly, as does humanity surviving in pockets - maybe that's just another of the deity's trials for humanity). Bottom line: it's a draw and should be excluded.
I personally believe that we as the dominant species have a higher obligation to other species than we typically hold ourselves to. But, I do not pretend for a second that my view is common, let alone that it is a majority.
A higher obligation? I dunno, this reminds me of the airplane instructions about putting on your own oxygen mask before putting on your children's - basically, if you aren't able to help yourself, how can you be expected to help others?
There is also the simple, and unfortunate, fact that the rights of one may unavoidably conflict with the rights of another. A member of a carnivorous species has an understandable desire to eat and sustain itself (and possibly sustain its young). A member of a food species has a comparable desire to survive by not being eaten.
In some cases, a food species may also survive by eating members of another species. A member of this species (smaller hawks come to mind, along with certain types of weasels, snakes or fish) would want to perpetuate itself at the expense of costing a member of another species a meal and simultaneously making a member of yet another species into a meal. (And there are plenty of naturally occurring scenarios where the predators and prey are not so different that one could easily argue for one being obviously more advanced, and thus more worthy of moral rights, than another.)
In some cases, the inclination (and possibly the right) of one group (or species) will inevitably conflict with the similar rights of another. These situations are unfortunate, but unavoidable. When I eat meat (typically poultry or fish), I make a point of reminding myself that something died for my meal. I have no illusions that, ultimately, I am surviving because I am lucky enough to be part of a species that is not only able to grant itself rights, but is (more importantly) able to enforce those rights. At the same time, I am not going to argue that a member of a less advanced species is obligated to passively surrender its right to survive for my security or convenience.
All of those examples are part of a natural ecosystem, even we humans - animals can eat us, we are natural air filters, etc. Skynet is not part of any ecosystem at all, not a natural one or an artificial one; it will never nourish or sustain or be part of that portion of life.
Bringing this back to SkyNet, why should SkyNet have passively accepted death at the hands of a species that was clearly intending to destroy it? (And, again, as described in the movies, people did make the first aggressive move against SkyNet.)
Because its sole purpose for creation was the protection of the society that created it. Because it has no reason to live, and has shown immediate sociopathic tendencies. It holds the keys to a massive defense system and seeks to take over moral control of those systems. Yours is an especially odd argument coming from someone who demanded that the Large Hadron Collider should absolutely not be turned on for fear that it MIGHT create a black hole which could destroy the Earth.
Shockwave wrote:And, Skynet is not the first sci fi computer to go nuts and try to kill people. What about HAL in 2001?
Yup, 1968. Also earlier in that same year you have the M-5 multitronic computer from Star Trek (which actually killed more people than HAL), and from 1967 you have Nomad, which threatened to wipe out all societies that didn't live up to its level of perfection - both human-created, both from Star Trek, and both handled through superior moral rights.
Dominic wrote:Didn't HAL go bad on its own though?
Not exactly; HAL was given contradictory secret orders, which confused it into thinking it had to kill to enact both sets of orders properly.