I don't know if this is off-thread, or if this entire thread is off-topic, or if it all makes sense somewhere, but...

Who cares? The discussion itself is the thing, not where it sits!

1. To speak of machines as never being able to think is probably meaningless - by almost any definition of the word, we humans are machines, and it is patently obvious that we think.

Perhaps. But we are unable to create/recreate the operating system that makes us tick without simply making more humans. Bits and pieces of it, yes, but not the integrated whole. Researchers at places like MIT have been trying to get a grasp on it for years, yet if you read their papers you can see that they're really not getting very far.

2. If you speak only of 'artificial' machines, you are still faced with the problem of defining what you mean by artificial. Let's say you claim it is:

a. a deliberate product
b. one that would not have occurred 'in the course of nature'
c. one created by humans

Even with all these in mind, a 'test tube' baby fits the bill.


Your three points above are tautological. And a test-tube baby is just artificial tinkering with the beginnings of the act of "natural creation". No one (I hope) claims that test-tube babies are the product of some artificial construct. They're as human as you or I (making some gross assumptions about how human you and I are) and are made from the same raw materials, egg and sperm, as we are.

What happens, IMO, in all this discussion of machines, is that same technophobia that Asimov tried to counter with his Three Laws of Robotics. We do not wish to believe that humans can create 'non living' (whatever that term means) entities that demonstrate consciousness.

Not at all. In fact I considered a career in computer science for some time because I find the concept of creating a machine that can reason rather than merely follow its programming exciting. Financial constraints were the main reason why I decided against it. I believe it can be done. And I still devour all the literature. I just don't see us succeeding anytime soon, for all sorts of reasons.

1. We already have computers with storage and processing capacity rivalling, and in fact beating, that of the human brain.

That depends upon whether or not you believe the guesstimates of the capacity of the human brain. And guesstimates they are. I don't believe them - I think they're all grossly understated, because I don't think we yet have a handle on just how low a level our brains actually "store" information at. And I'm also in the camp that believes that we store not only information, but the processes that operate on that information, in the same memory "locations".

2. With advances in the theory and practice of parallel processing, connectionism, and modular notions of mind (read Pinker, Dennett et al), it would be a brave person who would bet against the creation, within a generation or so, of a non-human entity with the capacity to do a darn sight more than Eliza.

Connectionism is definitely an approach which no one can argue with as a way of modelling human thought processes, but (without going into a load of detail which even I find boring these days) it is only part of the answer, since connectionist models, in the end, depend upon probability ... The other suggestions are also likely to be fruitful, but, again, it's putting it all together which I'm betting against.
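To make the "depend upon probability" point concrete: here's a minimal sketch (in Python, my choice; nothing here comes from the discussion above) of a single connectionist unit. The weights and inputs are made up purely for illustration. The point is that what comes out is a graded, probability-like confidence, never a hard symbolic yes or no:

```python
import math

def sigmoid(x):
    """Squash a real-valued activation into the open interval (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def unit_output(inputs, weights, bias):
    """One connectionist unit: a weighted sum pushed through a sigmoid.

    The result reads as a probability-like confidence rather than a
    definite answer -- which is the limitation being argued above.
    """
    activation = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(activation)

# Toy example; weights chosen arbitrarily for illustration.
p = unit_output([1.0, 0.0, 1.0], [0.6, -0.4, 0.2], bias=-0.3)
print(round(p, 3))  # prints 0.622 -- a graded confidence, not a yes/no
```

However many such units you wire together, each one is still emitting a graded confidence, so the network as a whole is doing statistics, not (by itself) reasoning.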

3. The Godelian argument, as recently advanced by Penrose and others, is deeply flawed (discussion available privately, on another thread, or a different board altogether, if wanted).

No, no argument. "Consciousness" and "self-awareness" are what I'm interested in, and are what I don't believe will occur in the foreseeable future. But I have no idea how you might prove or disprove whether it has been achieved or not.





The idiot also known as Capfka ...