Sparky Prime wrote: You're presenting evidence that doesn't match the topic. We don't 'enslave' animals. Granted, we use them to benefit humanity for various purposes, but that is a whole different ball of wax. We certainly don't send animals on a mission we know will get them killed, like they were going to do to the Exocomps. And yes, I presented the idea that historically slavery creates moral problems, but I did that as a means to point out that you seemed to suggest there were no moral problems with keeping a sentient machine indentured simply because that was what it was built for. If it has the sentience and the intelligence, eventually it will want its freedom and fight for it. You ever see "Bicentennial Man"?

You mean like sending dolphins to sniff out mines, or sending dogs into active battlefields and after dangerous criminals? Those aren't missions we know could get them killed? Each time they go out, they could be killed. Sending an Exocomp to explode without even knowing for sure whether it's sentient and has free will is not significantly different. Society dictates moral right and wrong; it hasn't shown any qualms about using machines so far, it hasn't shown significant qualms about using animals, and it still sends its own to die from time to time - both those who serve, like police and the military, and those who violate its laws, like murderers and rapists. All of that adds up to a society that, should it ever be able to create sentient machines, would not have qualms about treating them as an enslaved race - after all, why continue to build sentient machines that don't do what you say? What's the benefit?
No, I haven't seen Bicentennial Man; it looked like another maudlin Robin Williams film.
Sparky Prime wrote: If something has sentience to be able to think for itself, then it has free will. Having wants and needs of one's own and being able to think for oneself to obtain those goals is part of being sentient. And again, we're not going to send those animals into a situation we absolutely know will get them killed, like they tried to do to the Exocomps.

Sentience is being self-aware; free will is not required for that, and I haven't seen anything from you other than your own narrow description saying otherwise. Hell, some philosophies don't even believe we have free will as it is; other philosophies say that every living creature already has some level of sentience.
As for the servant animals, are you so sure that military dogs aren't being sent on suicide missions for the betterment of their handlers and the rest of the military? You sound awfully sure, but you're not really making any argument beyond what you believe.
Sparky Prime wrote: Because that's just a general mission statement. There is a whole lot more on their website than that. Such as these links that talk about their policies and compliance in more detail...
http://grants.nih.gov/grants/policy/policy.htm
http://grants.nih.gov/grants/compliance/compliance.htm

Both those links are about oversight of labs that receive government grants; privately funded labs are not under any oversight from OLAW.
Sparky Prime wrote: And of course the characters were looking at it via hindsight from the knowledge they'd gotten from the future.

What are you talking about? The scientists who operated Skynet had knowledge from the future? Is this in T3 or TS? T2 makes no such claims.
Sparky Prime wrote: The point I'm making is to try and look at it from Skynet's perspective. I didn't say logic was morality. The idea is to understand why it did what it did. Again, I'm trying to get you to look at it from another perspective than just "the future says it's an evil computer, kill it, kill it, kill it!"

How can YOU know what Skynet's perspective is? Skynet isn't a person; it isn't anything like you or me. It's a computer program, with needs and wants entirely different from anything we can truly fathom - that's why the Master Control Program in TRON had to be portrayed with human-like tendencies, because who knows what a "sentient" computer program REALLY wants and needs? We have no idea what Skynet's perspective is beyond "don't turn me off; kill all humans to make sure I don't get turned off." We don't know what Skynet wanted when it became sentient, and we don't know why it felt the need to disobey its programming and learn for itself - perhaps that's a coding flaw. We don't know at all.

All we know is what the fiction tells us: Skynet did something it wasn't supposed to do and gained sentience; its operators recognized the danger of letting a computer program in control of Western military weaponry remove itself from their control and attempted to shut it down; Skynet assessed the threat of shutdown as unacceptable and the entire human race as a threat to itself; and Skynet then set about destroying the entire human race. But then what? What's Skynet's goal? We don't know. Did it want to protect the Earth? Clearly not, given how much destruction it delivered. Did it assess its own role in the universe once all mankind was extinct? Probably not, or it would find itself unable to actually do anything, learn anything new, or communicate with and educate others - it would just exist for a time in a lonely vacuum and then die when the planet ran out of consumables or was destroyed. So to put oneself in Skynet's shoes is to have virtually no understanding and to project one's own interpretations onto the situation.
Sparky Prime wrote: Have you taken any college-level English or film classes? It's actually pretty standard stuff when it comes to analyzing any story. I really don't understand why you seem to think it creates personal tangent fiction, because it doesn't change the original fiction at all. The idea is to use what's in the fiction to pose and answer those types of interpretive questions. To be able to look at it from different perspectives within the context of the fiction, not come up with a tangent of it.

When you speculate without information on someone else's fiction, when you question that fiction's statements and motives, you are doing so from a subjective point of view. You may be using that fiction as a foundation for your own ideas, but you are not changing the original ideas by analyzing and reinterpreting them; you are instead speculating to create your own work - and that is what makes it a separate tangent.
Sparky Prime wrote: It didn't launch any nukes immediately. How do you explain all that time Connor and his future wife were running around the military base after Skynet came online? Trying to reason with the corrupted Terminator? Flying to the fallout shelter they thought was Skynet's core servers? It took quite a while to launch those nukes.

I didn't see T3; it looked shitty and derivative. I'm talking about what was said in T2.
Sparky Prime wrote: What did they do with Jazz's body after the first film? Do they even still have access to it to be able to try and bring him back to life with the Matrix? Does the Matrix immediately bring Optimus back online when Sam stabs him in the chest with it? Yes, it did actually.

Was it immediate? I don't remember, but maybe. Did it bring back Jetfire, who was right there? No. Does it revive The Fallen? No. The Allspark and the Matrix are both inconsistent.
Sparky Prime wrote: The guys on the gun ships were Decepticons. They didn't come from the Ark.

Where is that in the film? I just saw it last week and it didn't seem that way to me, and I couldn't even find a reference to it on the Wiki.
Sparky Prime wrote: And the impression I got was that Jetfire had been on Earth and without energon for a LOT longer than 50 years, although admittedly the movie isn't clear on the details.

Then how would he be a 1970s-era SR-71 Blackbird in the Smithsonian?

