Sparky Prime wrote:It seems to me like you're trying to use such tangents to argue around a point rather than actually discuss it. I didn't miss anything with my rant, as you called it. I just don't see how only saying 'the page is indeed quite sanitized' really addresses the point I was making. I mean, you followed that up in the very same sentence with "and that's just... well, is it really a shame?". How is that merely an afterthought? You might as well have said "yeah, they changed it, so what?". And again you are misrepresenting my point by distorting what I said. I haven't said that Wikipedia is not a responsible encyclopedia source at all. I actually admire the lengths they go to in ensuring articles are as accurate as possible, and personally I'd love it if Wikipedia could be used as an acceptable site to cite information from. But again, it all comes down to the point that absolutely anybody can edit a wiki page, which makes it an unreliable source to cite information from. Information on any given page can literally change from one minute to the next. And again, I only brought up this example to point out that it can most certainly be more than one or two words we're talking about here. And certainly the outcome of a conflict over a page can lead to some pretty drastic changes as well. We're not talking simply about objective and subjective facts here.
What does any of this have to do with the question of "is free will necessary for sentience," which is where this all started? It's terribly ironic that you claim I'm trying to use side-tangents to avoid discussion when you are the one who has taken us down the rabbit-hole of whether wikis can be trusted sources. Instead of addressing the SUBSTANCE of my point, you questioned the source despite the content being solid AND only being there in support of OTHER, NON-WIKI MATERIAL. I'll go back to page 8:
JediTricks wrote:And I haven't seen anything from your own descriptions that shows sentience doesn't go hand in hand with free will. But as you point out here, this is a deeply involved philosophical subject matter we're getting into that goes in many different directions. As such, I don't see that there is a strictly right or wrong answer no matter how you look at it.
Sentience: a sentient quality or state; feeling or sensation as distinguished from perception and thought; the quality or state of being sentient; consciousness. Sentient: responsive to or conscious of sense impressions; finely sensitive in perception or feeling. Wikipedia also has this, which delves into the concept: In the philosophy of consciousness, "sentience" can refer to the ability of any entity to have subjective perceptual experiences.
Free Will: voluntary choice or decision; freedom of humans to make choices that are not determined by prior causes or by divine intervention. From Wikipedia: Free will is the ability of agents to make choices unconstrained by certain factors. Factors of historical concern have included metaphysical constraints (for example, logical, nomological, or theological determinism), physical constraints (for example, chains or imprisonment), social constraints (for example, threat of punishment or censure, or structural constraints), and mental constraints (for example, compulsions or phobias, neurological disorders, or genetic predispositions).
Free will seems to require sentience, but nothing in sentience requires free will. I have seen nothing in this thread so far that has even remotely suggested otherwise.
Argue the substance. If you feel my secondary citations of Wikipedia are flawed, that should only strengthen your ability to argue the point if you're right. I don't think you are, and I haven't seen anything yet suggesting otherwise.
Dominic wrote:I never said that SkyNet's right to live superseded the right of people to live. I just said that those two rights conflicted.
That is how I took your following quote: "I personally believe that we as the dominant species have a higher obligation to other species than we typically hold ourselves to."
I know you later said you meant only that we should treat other species better than we currently treat them, but that's not how it reads to me; you said that we have a higher obligation to others than we hold ourselves to.
As for "why"...
I am acknowledging that self-aware things have an interest in surviving. That right can be disputed. It can even be countered by force if one is so inclined to do that. But, there is no reason to expect anyone or anything to passively surrender their right to existence.
It's not surprising that it wants to live, but my question was why it should have the moral right to destroy all humans merely to protect itself. Is it really just "because it wants to live?" Is that enough for even a human, much less a man-made machine? Is killing 3 billion innocents just to keep yourself from catching a virus acceptable? Where is the cutoff with that logic, or is there one?
And, in some cultures (even in modern times), parents are seen as having significant dominion over their offspring for life. Even when that control is not legally enforceable, the social penalties for ignoring one's parents can be significant enough to be compelling.
The good thing about these "moral rights" arguments is that our society can look down our noses at those "savages". The bad thing is that our society's kids have no morals and will kill us.
How self-aware is the vacuum cleaner though? The computer that I am typing this on has far more task-based and calculation power than you or I ever will. But, it cannot self-direct. It can only do basic internal tasks on its own, without any proper deliberation. Even the Roomba cannot, as far as we reasonably know, ponder what it is doing or contemplate alternatives.
I think the human brain has more computing power than any non-supercomputer machine still, and the potential maximum computing power of the human brain is vastly bigger than all the supercomputers combined:
http://www.wired.com/wiredscience/2011/ ... uter-data/
Anyway, my question is, why does "self-aware" matter in this scenario? Why does self direction make a difference to how we perceive Skynet or any artificially intelligent machine we create? If the spark of the Creator is what imbues all living creatures with value beyond a mere "thing", and the spark of the Creator is not found in Skynet, what makes Skynet different from a Roomba or a Ford Fusion or a Sonicare toothbrush from our moral perspective?
That would certainly be better than killing/destroying SkyNet/Moriarty. And, as much of a prison as a holodeck (or that cube drive Moriarty ended up in) may be, it would really be the only viable place for that life-form.
But that doesn't answer my question: "is it morally wrong to do so, are we limiting them to a worthless pretend life or are we freeing them to be whomever they want to be without the restrictions of their limited realities?" If the entire Terminator storyline turns out only to be happening inside a holodeck cube drive we imprisoned Skynet in when we realized what was about to happen, aren't we essentially torturing Skynet, based on your idea that it's a sentient being with a level of moral rights?
We create life for the purpose of convenience and recreation all the damned time. Ever seen deliberately hybridized spaniels/poodles? Ever seen a retriever/poodle? A maltese/poodle? (Those are all purely recreational dogs.) We create germ lines of lab animals with traits favourable to various types of research. (Mice and rats are commonly bred to have weak immune systems, which makes disease/vaccine research easier.)
How would you define "responsible" in this case? The standard that researchers and regulators generally use in these cases is that the new species cannot harm the existing biosphere. But, we certainly are not treating those immunity-gimped rodents with any respect. (And, the law does not require us to.)
Yes, we do need new laws and ethical standards. As it stands now, computers (like the chess-playing Deep Blue and the super search-engine Watson) are getting uncomfortably close to being "alive" and able to understand, rather than simply do, things.
[...] Again, computer programs are getting more sophisticated. At some point, we are going to have to consider the moral implications of shutting down computers, or of writing certain programs in the first place.
"Responsible" as in we find ourselves feeling required to give moral and ethical considerations to that new lifeform, to give legal rights and to nurture and feed, to take responsibility for any mistakes we make which harm it.
I'm not sure we have to give any considerations to an artificial intelligence; that's the philosophical question at hand. We are developing AI as a tool because we don't see it as a lifeform. That's essentially the core of this whole discussion we've been having, I suppose. What makes Deep Blue or Skynet worthy of new laws and ethical standards, why should we be viewing them as "alive", what moral rights do they have, that sort of thing. Is mere sophisticated computer intelligence a true lifeform if it has no actual physical feelings? Is evolving code really "alive" at all?
I have seen the term "genocide" used to describe the killing of one species by another. Would you suggest a wholly different term? (I would not be opposed to a word like "taxonicide". I would simply want the moral implications of the act described to be considered.)
I haven't seen a real-world application of "genocide" used in a serious manner in any other way than humans killing humans.
"Taxonicide" might be good (or "taxacide", perhaps a little more efficient). But we don't have a word for this in part because we do it regularly, the oceans have 80% less species than they did a hundred years ago thanks to man's hand. We have erased countless numbers of other species due to hunting and environmental impact, and we as a society have barely begun to care.
We are each the only one of our kind, though; every murderer put to death is a unique, self-aware thing, so does it matter what species it belongs to?
People are individuals. But, they are also members of a species numbering in the billions. As a species, we are pretty common.
Explain why Skynet being rare makes it special in this case. It's software; we make software every day. It runs on computers, and we not only make computers every day, we make them better every day than they were. So does it matter what species it belongs to, and if so, why?
Those are questions that we need to start considering, if not answering, in the near future.
Seems like we have some smart people right here trying to consider that. We're using science fiction as a jumping off point for those moral questions, just as a lot of science fiction seems to do. The more we talk about it and consider it from all sides, the more prepared we are to carry those ideas outside this forum to others in our daily lives and in a very small way affect the society we take part in.
That looks to be a fan wiki, which means that what they say means exactly bubkis unless they are referencing an official source. Their "grades" of being official seem to have to do with how late something was made relative to the originals.
I said as much, and even the ARTICLE says as much, but with no other official word on the matter from the content creator, there's not much to go on, so they went with the conservative interpretation of "canon" as based on the original author's hand (using the idea that the church is right and the Bible was written by the hand of God).
The whole point of a brand's IP is that it can be transferred like real property. In the case of "Terminator", those rights have apparently been fragmented. However, being made by someone other than the original owner or creator does not make more recent material any less official or legitimate.
Legitimacy is determined by the owner of a property.
I don't know that I like this, and much earlier I pointed out that this is an ethical sore spot that is tricky at best to accept. Just because you own the Mona Lisa now, does that mean you can paint its hair blond and claim that's what da Vinci always had in mind? And what of the works that The Terminator was found to be derived from, a pair of Harlan Ellison episodes of The Outer Limits? The then-owners of The Terminator settled and added an acknowledgement credit to the film, but James Cameron himself says otherwise. So does The Terminator spawn from 2 Outer Limits episodes? The "owner of the property" says it does because that owner had contractual rights to do so, yet the actual CREATOR of the property says no. Should there not be some real recognition given to art that takes itself outside contractual business rights? I think so. At the very least, I think it allows it to be seen as less legitimate. What about Alien vs. Predator? What about The Crow sequels? Basic Instinct 2?