Shanks:
> I don't know if this is off-thread, or if this entire thread is off-topic, or if it all makes sense somewhere, but...

Who cares? The discussion itself is the thing, not where it sits!
> 1. To speak of machines as never being able to think is probably meaningless - by almost any definition of the word, we humans are machines, and it is patently obvious that we think.

Perhaps. But we are unable to create/recreate the operating system that makes us tick without simply making more humans. Bits and pieces of it, yes, but not the integrated whole. Researchers at places like MIT have been trying to get a grasp on it for years, yet if you read their papers you can see that they're really not getting very far. Machines can be made to reason by following sets of rules (using the word "rule" very, very loosely), but they cannot use referents they don't have random access to, and they will not have had anything like the learning experience that even the least intelligent human has had. So they will never be able to think in the way that humans do. They may be able to think using the circumscribed set of experiences with which they are programmed. It may be possible to provide them with stimulation similar to what babies receive and "grow them up" gradually, although at an accelerated pace. Either way, IMHO, their thought processes cannot be truly "human" in nature.
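To make concrete what I mean by rule-following (in that very loose sense), here is a minimal sketch of a forward-chaining rule engine; the facts and rules are invented purely for illustration. It "reasons", but only over symbols it was explicitly handed:

    # A toy forward-chaining inference engine: it derives new "facts" by
    # repeatedly applying if-then rules until nothing new can be added.
    facts = {"socrates_is_a_man"}
    rules = [
        ({"socrates_is_a_man"}, "socrates_is_mortal"),  # premises -> conclusion
        ({"socrates_is_mortal"}, "socrates_will_die"),
    ]

    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # "deduce" the conclusion
                changed = True

    print(facts)

Everything it can ever conclude was already implicit in the symbols it was given; there is no referent outside them, which is exactly the limitation I mean.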
> 2. If you speak only of 'artificial' machines, you are still faced with the problem of defining what you mean by artificial. Let's say you claim it is:
> a. a deliberate product
> b. one that would not have occurred 'in the course of nature'
> c. one created by humans
> Even with all these in mind, a 'test tube' baby fits the bill.

Your three points above are tautological in the context of my interest in AI. I'm not really interested in a philosophical debate over the term "artificial" and its connotations. A test-tube baby is just artificial tinkering with the beginnings of the act of "natural creation". No one (I hope) claims that test-tube babies are the product of some artificial construct. We don't determine their genetic makeup; that's predetermined. (This could, of course, change now, but it doesn't affect this discussion.) Test-tube babies are as human as you or I (making some gross assumptions about how human you and I are) and are made from the same raw materials - egg and sperm - that we are.
> What happens, IMO, in all this discussion of machines, is that same technophobia that Asimov tried to counter with his Three Laws of Robotics. We do not wish to believe that humans can create 'non living' (whatever that term means) entities that demonstrate consciousness.

Technophobia? Not at all. In fact, I considered a career in computer science for some time because I find the prospect of creating a machine that can reason and have some form of self-awareness, rather than merely follow its programming, exciting rather than scary. I can enjoy The Matrix without worrying that it may be our future! I believe it can be done. And I still devour all the literature. I just don't see us succeeding anytime soon, for all sorts of reasons.
> 1. We already have computers with storage and processing capacity rivalling, and in fact beating, that of the human brain.

That depends upon whether or not you believe the guesstimates of the capacity of the human brain. And guesstimates they are. I don't believe them - I think they're all grossly understated, because I don't think we yet have a handle on just how low a level our brains actually "store" information at. And I'm also in the camp that believes we store not only information, but also the processes that operate on that information, in the same memory "locations". Further, simply having the equivalent resources doesn't solve the basic technical problems.
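For flavour, here is the sort of back-of-envelope guesstimate I mean. Every figure below is a commonly quoted ballpark, not a measurement, which is rather my point:

    # A common back-of-envelope "brain capacity" guesstimate. Each number
    # is a rough, frequently quoted ballpark; changing any assumption
    # swings the answer by orders of magnitude.
    neurons = 1e11              # ~100 billion, a standard estimate
    synapses_per_neuron = 1e4   # usually quoted as 1,000 to 10,000
    bits_per_synapse = 1        # wildly uncertain: could be many bits, or
                                # storage could sit below the synapse level

    total_bits = neurons * synapses_per_neuron * bits_per_synapse
    print(f"{total_bits / 8 / 1e12:.0f} terabytes")  # ~125 TB on these assumptions

Nudge bits_per_synapse upward, or allow sub-synaptic storage, and the figure balloons, which is why I put no faith in any of these numbers.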
> 2. With advances in the theory and practice of parallel processing, connectionism, and modular notions of mind (read Pinker, Dennett et al), it would be a brave person who would bet against the creation, within a generation or so, of a non-human entity with the capacity to do a darn sight more than Eliza.

The reference to Eliza was a side-swipe at Turing's definition, not an argument for or against anything. Connectionism is certainly an approach to modelling human thought processes that no one can argue with, but (without going into a load of detail which even I find boring these days) it is only part of the answer, since current connectionist models, in the end, depend upon probability ... do human thought processes? The other suggestions are also likely to be fruitful, but, again, it's being able to put it all together that I'm betting against.
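By "depend upon probability" I mean this: at bottom, a connectionist unit turns a weighted sum of inputs into a number read as a probability-like confidence. A minimal sketch, with weights invented for illustration:

    import math

    # A single connectionist unit (a logistic neuron). In a real network
    # the weights would be learned; these are made up for the example.
    def unit(inputs, weights, bias):
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-activation))  # squashed into (0, 1)

    p = unit(inputs=[0.9, 0.1], weights=[2.0, -1.5], bias=-0.3)
    print(f"the unit 'believes' with confidence {p:.2f}")  # ~0.79

Stack thousands of these and you get impressive pattern recognition, but the question remains whether a weighted vote of probabilities is what a human thought process actually is.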
> 3. The Godelian argument, as recently advanced by Penrose and others, is deeply flawed (discussion available privately, on another thread, or a different board altogether, if wanted).

No, no argument. Godel's theorems are getting pretty long in the tooth anyway, and attacking them is a bit like shooting fish in a barrel for our philosophical brethren. The development of machine-based "consciousness" and "self-awareness" is what I'm interested in, and is what I don't believe will occur in the foreseeable future. If a computer could demonstrate objectively that it is genuinely thinking about "What am I, and who am I, and what is my place in the scheme of things?" I would concede defeat. But I have no idea how you might prove or disprove whether that has been achieved.
And as a postscript to this (last) post on this topic from me, I would reiterate - this is only my opinion of the state of things based on the information I get. I could be completely wrong, and someone may have cracked it.
