old hand | Joined: Mar 2000 | Posts: 1,004
David
Wittgenstein's over-quoted line, "Whereof we cannot speak, thereof we must remain silent", seems appropriate here.
Either one suggests that all language is nothing more than language, and that attempting to consistently link any labels to real-world referents is futile and meaningless, or one takes the (IMO, 'coherentist') view that language is yet another imperfect tool and we simply need to use it as best we can. In the first scenario, all discussions of consciousness are as meaningful as each other - the point is the discussions, not consciousness.
In the second scenario, we come up against the apparently huge divide between third-person and first-person apprehensions of mental states. Or, as one might say, coming back to a discussion that we have had somewhere on the board before: what are qualia?
IMO, insofar as qualia are 'real' things, non-human, human-created entities will someday have them.
cheers
the sunshine warrior
Carpal Tunnel | Joined: Nov 2000 | Posts: 3,439
Non-human, human-created entities may someday think, but I wonder if that will make "them" sentient? Wow.
Pooh-Bah (OP) | Joined: Mar 2001 | Posts: 2,379
IP: Machines have been cultivating empathy in us for a long time.
MAX: I wonder if you could expand on this for me?
Strange but true: a lot of the first post was whimsy, meant to suggest certain things, but not to work them all out.
Shanks's post is about the equivalence of ascribing consciousness to "machines" and "non-machines." I answered that the term "consciousness" is problematic. If it is, it might pose a problem for those who want to discuss AI as a form of consciousness. Obviously, I don't think it is one, or that it will ever become one. In fact, I don't think there is any such thing as a 'form of consciousness' for it to become.
A more interesting question, then, is to consider the point at which machines might become members of an ethical community. I am suggesting that that would be the point at which we have empathy for them. This exceeds any connection with language, so I'm not going to pursue it here.
The line you're asking about is a quip: Once machines have entered into the ethical community, their empathy for us would be no less significant than ours for them. It is our empathy which will have recognized them as members of the ethical community. Once they are members of that community, their recognition of us will be no less important than ours of them.
In the line you ask about, the process is retroactive. The irony is that machines, which cannot yet be considered members of an ethical community, are preparing us to admit them into it. This is farce, meant to point out the way we subordinate ourselves to machines. And to subordinate oneself is a hair's breadth from being subordinated by another. (Shanks will call this a category error.)
Some machines that ravage are purchased under such a burden of debt that they can never be shut down and operate as though with a purpose of their own. They subordinate our interests to theirs (as it were); in this way they individuate themselves over and against us: they demand to be recognized.
It's an old image. Wagner used it. The sentence is farce, but I think there's truth to it.
Pooh-Bah (OP) | Joined: Mar 2001 | Posts: 2,379
Bridget,
I think my reply to Max's post should clear all this up.
IP
Pooh-Bah (OP) | Joined: Mar 2001 | Posts: 2,379
Shanks: "Wittgenstein's over-quoted line, 'Whereof we cannot speak, thereof we must remain silent', seems appropriate here."
So much so that I even misquoted it in my second (first?) post. I'm pretty much done with this topic for here and now. Thanks, shanks.
IP
Carpal Tunnel | Joined: Nov 2000 | Posts: 3,146
Shanks: "I don't know if this is off-thread, or if this entire thread is off-topic, or if it all makes sense somewhere, but..."

Who cares? The discussion itself is the thing, not where it sits!

"1. To speak of machines as never being able to think is probably meaningless - by almost any definition of the word, we humans are machines, and it is patently obvious that we think."

Perhaps. But we are unable to create or recreate the operating system that makes us tick without simply making more humans. Bits and pieces of it, yes, but not the integrated whole. Researchers at places like MIT have been trying to get a grasp on it for years, yet if you read their papers you can see that they're really not getting very far. Machines can be made to reason by following sets of rules (using the word "rule" very, very loosely - there's a rough sketch of the sort of thing I mean at the end of this post), but they cannot use referents they don't have random access to, and they will not have had anything like the learning experience that even the least intelligent human has had. Therefore, they will never be able to think in the way that humans do. They may be able to think using a circumscribed set of experiences with which they are programmed. It may be possible to provide them with stimulation similar to what babies receive and "grow them up" gradually, although at an accelerated pace. Either way, IMHO, their thought processes cannot be truly "human" in nature.

"2. If you speak only of 'artificial' machines, you are still faced with the problem of defining what you mean by artificial. Let's say you claim it is: a. a deliberate product; b. one that would not have occurred 'in the course of nature'; c. one created by humans. Even with all these in mind, a 'test tube' baby fits the bill."

Your three points above are tautological in the context of my interest in AI. I'm not really interested in a philosophical debate over the term "artificial" and its connotations. A test-tube baby is just artificial tinkering with the beginnings of the act of "natural creation". No one (I hope) claims that test-tube babies are the product of some artificial construct. We don't determine their genetic makeup; that's predetermined. (This could, of course, change now, but it doesn't affect this discussion.) Test-tube babies are as human as you or I (making some gross assumptions about how human you and I are) and are made from the same raw materials - egg and sperm - that we are.

"What happens, IMO, in all this discussion of machines, is that same technophobia that Asimov tried to counter with his Three Laws of Robotics. We do not wish to believe that humans can create 'non living' (whatever that term means) entities that demonstrate consciousness."

Technophobia? Not at all. In fact I considered a career in computer science for some time because I find the concept of creating a machine that can reason and have some form of self-awareness, rather than merely follow its programming, exciting rather than scary. I can enjoy The Matrix without worrying that it may be our future! I believe it can be done. And I still devour all the literature. I just don't see us succeeding anytime soon, for all sorts of reasons.

"1. We already have computers with storage and processing capacity rivalling, and in fact beating, that of the human brain."

That depends upon whether or not you believe the guesstimates of the capacity of the human brain. And guesstimates they are. I don't believe them - I think they're all grossly understated, because I don't think we yet have a handle on precisely how low a level our brains actually "store" information at. And I'm also in the camp that believes we store not only information, but the processes that operate on that information, in the same memory "locations". Further, simply having the equivalent resources doesn't solve the basic technical problems.

"2. With advances in the theory and practice of parallel processing, connectionism, and modular notions of mind (read Pinker, Dennett et al.), it would be a brave person who would bet against the creation, within a generation or so, of a non-human entity with the capacity to do a darn sight more than Eliza."

The reference to Eliza was a side-swipe at Turing's definition, not an argument for or against anything. Connectionism is certainly an approach to modelling human thought processes that no one can argue with, but (without going into a load of detail which even I find boring these days) it is only part of the answer, since current connectionist models, in the end, depend upon probability ... do human thought processes? The other suggestions are also likely to be fruitful, but, again, it's being able to put it all together which I'm betting against.

"3. The Gödelian argument, as recently advanced by Penrose and others, is deeply flawed (discussion available privately, on another thread, or a different board altogether, if wanted)."

No, no argument. Gödel's theorems are getting pretty long in the tooth anyway, and attacking them is a bit like shooting fish in a barrel for our philosophical brethren. The development of machine-based "consciousness" and "self-awareness" is what I'm interested in, and is what I don't believe will occur in the foreseeable future.

If a computer could demonstrate objectively that it is genuinely thinking about "What am I and who am I and what is my place in the scheme of things?" I would concede defeat. But I have no idea how you might prove or disprove whether that has been achieved. And as a postscript to this (last) post on this topic from me, I would reiterate - this is only my opinion of the state of things based on the information I get. I could be completely wrong, and someone may have cracked it.
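Here's that rough sketch of rule-following I promised above (Python; the rules are invented for illustration and this is nowhere near the real Eliza, but the principle is the same): pattern in, canned phrase out, and not a referent in sight.

```python
import re
import random

# A minimal, made-up Eliza-style responder: each rule is a pattern plus canned replies.
# It "reasons" only in the sense of matching rules; there are no referents behind the words.
RULES = [
    (r"\bI need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"\bI am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"\bbecause (.*)", ["Is that the real reason?", "Does {0} explain anything else?"]),
]
DEFAULT = ["Please go on.", "Tell me more.", "I see."]

def respond(utterance: str) -> str:
    """Return a reply by applying the first matching rule, or a stock phrase."""
    for pattern, replies in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(replies).format(*match.groups())
    return random.choice(DEFAULT)

if __name__ == "__main__":
    # Prints one of the canned variations, e.g.
    # "Why do you think you are worried that machines will never think?"
    print(respond("I am worried that machines will never think"))
```

However fluent the output looks, nothing in there has had anything like a learning experience; it is rules all the way down.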
The idiot also known as Capfka ...
Carpal Tunnel | Joined: Aug 2000 | Posts: 3,409
Of course, I'm almost always wrong when predicting information processing trends. So the first successful "Deep Thought" computer is probably being commissioned right now! I stumbled across this today, and thought that it was both interesting and relevant to this thread. http://www.newsobserver.com/monday/business/Story/419010p-414835c.html
Carpal Tunnel | Joined: Nov 2000 | Posts: 3,146
Self-programming gate arrays are certainly a promising field of study in adaptive computing. The prototypes were around ten or so years ago, and were hailed by a few people as the answer to all AI's problems. It's certain that there are all sorts of interesting problems they can be set to. Put enough of them together, provide them with the appropriate instruction set, and, who knows, consciousness or at least self-awareness may be possible. Thanks for the link, Max.
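P.S. Just to give a flavour of what "self-programming" could mean at toy scale, here's a little sketch (Python; a gross simplification of my own, nothing to do with how real gate arrays are built): a handful of two-input lookup-table "gates" whose configuration bits get flipped at random, with any flip that doesn't hurt the match to a target function being kept.

```python
import random

# Toy model of a "self-programming" gate array (my own simplification, not real hardware):
# each gate is a 2-input lookup table (4 configuration bits); the device rewrites its own
# configuration, keeping any random change that matches the target behaviour at least as well.

N_GATES = 3  # gate0 and gate1 read the inputs; gate2 combines their outputs

def run(config, a, b):
    """Evaluate the fixed 3-gate circuit under a given configuration."""
    g0 = config[0][(a << 1) | b]
    g1 = config[1][(a << 1) | b]
    return config[2][(g0 << 1) | g1]

def score(config, target):
    """How many of the four input pairs the circuit gets right."""
    return sum(run(config, a, b) == target(a, b) for a in (0, 1) for b in (0, 1))

def self_program(target, steps=2000):
    """Hill-climb the configuration bits toward the target function."""
    config = [[random.randint(0, 1) for _ in range(4)] for _ in range(N_GATES)]
    best = score(config, target)
    for _ in range(steps):
        g, bit = random.randrange(N_GATES), random.randrange(4)
        config[g][bit] ^= 1              # flip one configuration bit
        new = score(config, target)
        if new >= best:
            best = new                   # keep the change
        else:
            config[g][bit] ^= 1          # revert it
        if best == 4:
            break
    return config, best

if __name__ == "__main__":
    cfg, hits = self_program(lambda a, b: a ^ b)  # target: XOR
    print("matched", hits, "of 4 input patterns of XOR")
```

Scale that idea up by a few orders of magnitude and give it richer feedback than a four-row truth table, and you have the optimistic version of the argument; whether it ever amounts to self-awareness is exactly what I'm doubting.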
The idiot also known as Capfka ...
Carpal Tunnel | Joined: Aug 2000 | Posts: 3,409
Thanks for the link, Max.
You're welcome, Dave. Dave, don't do that. I can't let you do that, Dave.