Wordsmith.org: the magic of words

Wordsmith Talk


#25835 04/04/2001 2:34 AM
Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
Very soon, we will say machines think as we do, consciously. It won't be because we have any idea what consciousness is. But we will mean something when we say it. And that we will say it--and, finally, forget to say it--has a meaning of its own. Machines will think as we do when we recognize them doing so. We will not come to realize that machines think through analysis, machines will come to think when we have empathy for them. Machines have been cultivating empathy in us for a long time.

Last night, I watched part of a documentary on Jane Goodall; really, it was on chimpanzees. The documentary's strategy was simple. It showed the chimps behaving, apparently thinking, as humans do when we attribute thinking to them. It even showed that chimp and human intelligence develop at about the same rate. The chimps were adorable. It was easy to fall in love with them on television.

Next, the documentary showed hunters killing chimps for meat. To be sure, chimps are not the only bush meat. They are just part of an abundance of it. The documentary showed chimp arms being smoked over fires to prepare them for shipment to market. The documentary showed piles of chimp corpses. It showed a chimp baby being forced to play with its dead brother by its brother's killers. The documentary showed the faces of chimps: skin burned and shrunken back, exposing teeth; eyes cooked like egg whites in their sockets and filled with ash. It showed the carnage on tables in the market, in cooking pots, in people's mouths.

To permit a holocaust, remove empathy. Had the documentary not followed the strategy of first likening chimps to humans, the viewer would have been less likely to show concern for what is happening to them.

But the chimps' was not the only holocaust documented. Theirs took place within the larger one of the forest, and that one within the politics of civil war in Africa.

But what was striking about the documentary was its need for this strategy at all. It is, sadly, not strange that we can have no empathy for chimp, man, or forest. What is interesting is that reverence should require empathy at all.

It is possible that machines have already begun to think, and it is possible they have begun not to recognize us.





#25836 04/04/2001 8:50 AM
Joined: Mar 2000
Posts: 1,027
old hand
Dear IP,

I should be interested in knowing whether this "documentary" left a lasting impression on your mind, and if so, of what sort. Did it change any of your habits, ways of thinking, or attitudes? Did you think it had an educational value? BTW, was it on a commercial channel?


#25837 04/04/2001 10:28 AM
Joined: Mar 2000
Posts: 11,613
Carpal Tunnel
Wsieber as usual punched right to the core: what was the point of the documentary? Well done. Insel., I am struggling a bit to "connect the dots" and come up with the whole picture you were drawing. Here is what I conclude, and please correct me if I am wrong: We humans project our ways of thinking onto machines, so that even if they don't "think as we do", if we believe they do, the results in terms of our behavior will be the same. And, like some science fiction that I have read, there may come a time when we believe that machines are more powerful than we are, and that they, not having any empathy for us, will either destroy us or that we will destroy ourselves before they can "get us"?

If the above is an accurate summarization, surely you jest.
However, your point about lack of empathy, and the need to build it, is very valid. If you are a typical human being, you have very little empathy for a resident of a far galaxy. But if he moved in next door and you got to know him, you probably would have a lot more. That's the way most of us are.

I would add, though, that I think a holocaust could occur even if the perpetrator(s) had empathy for their victims.
The perpetrator could think, "I'm really sorry that you have to be sacrificed, but it's for the greater good."


#25838 04/04/2001 11:02 AM
Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
What was the point of the documentary?

The point of the documentary wasn't my point.

(Without impugning wsieber in any way, if my point seems to have been the point of the documentary, then I haven't succeeded in making it.)

And, like some science fiction that I have read, there may come a time when we believe that machines are more powerful than we are, and that they, not having any empathy for us, will either destroy us or that we will destroy ourselves before they can "get us"?

I am writing about attributes of language and about ethics, not science fiction.

I would add, though, that I think a holocaust could occur even if the perpetrator(s) had empathy for their victims. The perpetrator could think, "I'm really sorry that you have to be sacrificed, but it's for the greater good."

Here I cannot agree. Look at Nazi propaganda. Consider words like "nip," "gook," "kike" and their use and impact. Even a version of "for the greater good" was used in Stalin's USSR to make individuals disposable.

I wrote a much more complete response to wsieber, which seems to have been lost to the ether.

wsieber: I may have sent it to you as a PM by accident. If so, would you either post it or send me a copy so that I can? Thanks. IP



#25839 04/04/2001 1:52 PM
Joined: Mar 2000
Posts: 11,613
Carpal Tunnel
if my point seems to have been the point of the documentary, then I haven't succeeded in making it.)
No, the point of the documentary did not seem to be the point of your post, to me.

I am writing about attributes of language and about ethics, not science fiction.
That is a big relief. Okay, language: it seems to me that you are saying when we ascribe the property of thinking to machines, it will be at that point that they begin to think?
In the way we imagine them thinking as we do, I mean? Oh dear, if I got your first paragraph right, you said it better than I am. Basically: machines are what they are, in actuality. But we imagine them to be more than that, when we start giving them attributes such as empathy.
Is that about right?

Of course language plays a huge role in ethics. We have had many discussions about political correctness here, which cover a good deal about ethics. I can't recall if any of them discussed language used as propaganda, though.

The perpetrator could think, "I'm really sorry that you have to be sacrificed, but it's for the greater good."

Here I cannot agree. Look at Nazi propaganda.
Please note: I said "could", not "did". I think it could happen.
----------------------------------------------------------

I had wondered if I could appropriate a usage of the word holocaust in a specific way, and after looking it up, realized I couldn't. But Atomica had something interesting:

"USAGE NOTE Holocaust has a secure place in the language when it refers to the massive destruction of humans by other humans. Ninety-nine percent of the Usage Panel accepts the use of holocaust in the phrase nuclear holocaust. Sixty percent of the Panel accepts the sentence As many as two million people may have died in the holocaust that followed the Khmer Rouge takeover in Cambodia. But because of its associations with genocide, people may object to extended applications of holocaust. When the word is used to refer to death brought about by natural causes, the percentage of the Panel accepting drops sharply. Only 31 percent of the Panel approves the sentence In East Africa five years of drought have brought about a holocaust in which millions have died. In a 1987 survey, just 11 percent approved the use of holocaust to summarize the effects of the AIDS epidemic. This suggests that other figurative usages such as the huge losses in the Savings and Loan holocaust may be viewed as overblown or in poor taste.•When capitalized Holocaust refers specifically to the destruction of Jews and other Europeans by the Nazis and may also encompass the Nazi persecution of Jews that preceded the outbreak of the war.

WORD HISTORY Totality of destruction has been central to the meaning of holocaust since it first appeared in Middle English in the 14th century, used in reference to the biblical sacrifice in which a male animal was wholly burnt on the altar in worship of God. Holocaust comes from Greek holokauston (“that which is completely burnt”), which was a translation of Hebrew ‘ōlâ (literally “that which goes up,” that is, in smoke). In this sense of “burnt sacrifice,” holocaust is still used in some versions of the Bible. In the 17th century the meaning of holocaust broadened to “something totally consumed by fire,” and the word eventually was applied to fires of extreme destructiveness. In the 20th century holocaust has taken on a variety of figurative meanings, summarizing the effects of war, rioting, storms, epidemic diseases, and even economic failures. Most of these usages arose after World War II, but it is unclear whether they permitted or resulted from the use of holocaust in reference to the mass murder of European Jews and others by the Nazis. This application of the word occurred as early as 1942, but the phrase the Holocaust did not become established until the late 1950s. Here it parallels and may have been influenced by another Hebrew word, šô’â (“catastrophe,” in English, Shoah). In the Bible šô’â has a range of meanings including “personal ruin or devastation” and “a wasteland or desert.” Šô’â was first used to refer to the Nazi slaughter of Jews in 1939, but the phrase haš-šô’â (“the catastrophe”) became established only after World War II. Holocaust has also been used to translate ḥurbān (“destruction”), another Hebrew word used to summarize the genocide of Jews by the Nazis.


--------------------------------------------------------------------------------
The American Heritage® Dictionary of the English Language, Fourth Edition Copyright © 2001 by Houghton Mifflin Company. All rights reserved.


--------------------------------------------------------------------------------





#25840 04/04/2001 1:52 PM
Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
Wittgenstein's later reflections on language can be summed up in the statement (a paraphrase) "Of the things of which we cannot speak, we should remain silent." His reflections comprise a group of thought experiments which partially delimit language.

Among these reflections is the question "When can we say a child has learned to read?" What he is investigating is the question itself. This question's answer may be described as belonging to its grammar.

The question seems to imply access to a "private experience" of the child's, something which is 'demonstrably' impossible. Rather, at a certain point we simply say, "Oh, the child has learned to read."

Not only can't we ascertain the child's "private state," we can't speak of its 'private state' at all: because the expression is inadequate to its purported meaning, its meaning can't be expressed; and because its meaning can't be expressed, the expression is a meaningless term. While we can say the words, they will never mean anything.

Wittgenstein goes through a whole series of such experiments and casts doubt on every expression whose meaning attaches to some 'private state.'*

What he cannot doubt is pain. If someone cries out in pain, I have no doubt of their experience. It is this phenomenon, which is not formally meaning, that I am calling "empathy." I am suggesting that it may lie at the heart of ethics: because we empathize with one another, we recognize one another as members of an ethical community.
My observations on the documentary concern its strategy of manipulating empathy. How I feel about the ostensible subject of the documentary is completely irrelevant. I did not make this clear.

The question of a private state of a machine is virtually identical to that of the private state of a human being. The question whether they belong to an ethical community will be affirmed when we empathize with them. Whether or not we ever will empathize with them remains an open question. But our capacity to empathize with them is evidenced, among other places, in popular culture--including science fiction.

As a matter of logic, if machines become part of an ethical culture, just as there is no way to discuss the inner state of a child learning to read, there will be no way to differentiate between the ethical expressions of a human and those of a machine.

It was the illustration of this aspect of language I found fascinating in the documentary and which, I felt, made it relevant to the general topic of this board.

* use of scare quotes because, for reasons that are clear, the term is a tortured one.



#25841 04/04/2001 2:28 PM
Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
Basically: machines are what they are, in actuality. But we imagine them to be more than that, when we start giving them attributes such as empathy. Is that about right?

Not quite. I am saying that any concept, like "reality" (and its components, e.g., the possible thinking of machines) is delimited in "fact" by the capacity of language to express it.

Please note: I said "could", not "did". I think it could happen.

I understand. Since your proposition can't be tested, I can't deny it. I will say it goes against my deepest intuition.

Holocaust

I know I wouldn't use "holocaust" to describe a natural catastrophe. As to the exact number of dead, this is reminiscent of our recent discussion of "decimate" and, as percentage was there irrelevant, so is absolute number here.

I would suggest that a holocaust had to be intentional--just as bringing this sacrifice, the hagiga or "festival offering," was necessarily intentional if it was to count as an offering. Since intentionality is a "subjective state," it is worth noting that the presence of God in one's thoughts is at once the reason for the intentionality and a possible "solution" to the kinds of problems I addressed in the Wittgenstein post.



#25842 04/04/2001 4:25 PM
Joined: Dec 2000
Posts: 1,055
old hand
Offline
> ... is delimited in "fact" by the capacity of language to express it.

Indeed, all forms of science and knowledge must come before the court of language and be judged.


"And the Fool on the Hill sees the Sun going down,
and the eyes in his head see the World turning round." -- Beatles

Mr. Inside-Out


#25843 04/04/2001 5:25 PM
Joined: Nov 2000
Posts: 3,146
Carpal Tunnel
I think the use of "holocaust" has two parameters: One, a specific group must be responsible for the carnage and, two, a specific group must be targeted. I've never seen the word used as in "bubonic plague inflicted a holocaust on Europe between 1346 and 1350".

Also, for a plausible use of the word, I think that the act of destruction has to be quite deliberate. Here in Zild, a Maori politician given to disengaging her brain before opening her mouth last year accused the Europeans of "inflicting a holocaust" on the Maori during the middle of the 19th century. Her justification for this was the arbitrary confiscation of land, which drove Maori away from their traditional sources of food (as an unintended consequence); the introduction of measles, chicken pox and tuberculosis (endemic among the European population who arrived from elsewhere); the NZ Wars (which had an extremely low casualty rate); and the very occasional deliberate attempt to spread smallpox through contaminated goods.

Maori extremists and bleeding-heart liberal non-Maori agreed with her, but the majority of NZers saw it as grandstanding. While there was a steep decline in Maori population following the European arrival, there is absolutely no evidence to support the concept of a deliberate official policy on the part of European leaders to destroy the Maori. Ergo, no holocaust.

See also "post-colonial stress disorder" ...



The idiot also known as Capfka ...
#25844 04/04/2001 8:22 PM
Joined: Aug 2000
Posts: 3,409
Carpal Tunnel
Offline
Maori extremists and bleeding-heart liberal non-Maori agreed with her, but the majority of NZers saw it as grandstanding.

Ah, Tariana. A friend of mine has a rather unflattering nickname for her, which I cannot recall, but it is a clever pun on her name, and implies that she is not worth listening to. I found his viewpoint interesting because he was one of the generation that were beaten by teachers if caught speaking Te Reo. Despite that, he does not consider himself the victim of an attempted Holocaust.


#25845 04/04/2001 10:53 PM
Joined: Oct 2000
Posts: 5,400
Carpal Tunnel
I have a lot of doubts about AI (artificial intelligence)--the idea of truly thinking machines. I think that our drive to create these machines is almost a fool's errand-- and the "almost" modifier is important.

Alchemy is bunk-- but the work of the alchemists led the world to a better understanding of chemistry-- and can we change lead into gold? No, but can it be done? Yes-- supernovae and other cosmic forces do it all the time! And we know and understand how, even if we don't yet do it.

AI --if its goal is to make a machine that thinks like a human-- is bound to fail-- but we humans become enriched as we learn more and more about how we think-- and what we need to do-- individually and collectively-- to think better.

In this country-- not too long ago-- it was "thought" okay to call men of color "boy"-- it is no longer thought acceptable. Have we changed everybody's heart and mind? no, but our change in language (and i don't like PC language-- i mean just generally acceptable language) is the start to changing all of our minds.

Where will our quest for AI lead us? to a more thoughtful society i hope-- but we moved from alchemy, to real chemistry, to physics-- and learned to split the atom-- to change matter in a way alchemists never thought-- and one that has the potential still to destroy our world. AI will bring many changes... i hope better ones than alchemy!


#25846 04/05/2001 1:02 AM
Joined: Jan 2001
Posts: 13,858
wwh Offline
Carpal Tunnel
Dear of troy: You have a good point about the "spin-offs" that are likely to be produced by a long hard struggle to master artificial intelligence.
We all know some of the benefits that NASA's research has produced, and will continue to produce.


#25847 04/05/2001 2:38 AM
Joined: Aug 2000
Posts: 3,409
Carpal Tunnel
Offline
Machines have been cultivating empathy in us for a long time.

I wonder if you could expand on this for me? I missed it the first time I read your post, being distracted by reflections on the carnivorous, pack-hunting nature of chimpanzees, which reflections colour my own feelings of empathy toward those primates.




#25848 04/05/2001 2:57 AM
Joined: Mar 2000
Posts: 11,613
Carpal Tunnel
I am saying that any concept, like "reality" (and its components, e.g., the possible thinking of machines) is delimited in "fact" by the capacity of language to express it.

Oh yeah. Definitely. I was wishing today that there was a word that, when used, we would know would not mean a physical feeling, but another, internal-only, kind. Maybe two: one for emotion-feeling, and another for, um (speaking of inadequate language), what for lack of a better term I'll call mental feeling, as in, "I've got a feeling that will not work out". For instance, if I say I felt warm, without context no one can tell if I mean temperature-wise, or the warm feeling I get from a message that virtually radiates warmth.



#25849 04/05/2001 5:58 AM
Joined: Mar 2000
Posts: 1,027
old hand
If someone cries out in pain, I have no doubt of their experience. It is this phenomenon, which is not formally meaning, that I am calling "empathy."

Excuse me for taking this apart some more:
The first sentence implies that this is an unfailing, quasi-automatic reaction of you as a human being.
But the remainder of your argument suggests that you regret the very fact that empathy is not general, but has to be taught, and can be "manipulated". A quality which would be inborn/instinctive could not be considered part of ethics, because, as I see it, ethics is about conscious social behavior.

Now something rather provocative: Is there really a categorical difference between the first, purely emotional reaction on hearing someone cry out in pain on one hand, and witnessing a valuable object (like a brand-new car) going to pieces in a crash, on the other hand?
What I want to demonstrate is a warning not to stylize every pinching gut-feeling as empathy.


#25850 04/05/2001 7:34 AM
Joined: Nov 2000
Posts: 3,146
Carpal Tunnel
Artificial intelligence (AI) has never been about developing a computer-based system which can literally "think", although that's been the popular perception, not helped by the generations of scifi writers who've written about androids (cf Data in Star Trek NG).

Very briefly, AI has really been about developing machines capable of logically reasoning within circumscribed boundaries about a limited range of relatively narrow subjects. And even then success has been slight. The chances of machines developing "empathy" within the foreseeable future appear to continue to be, um, (including zero?) none.

To be truly intelligent, computers would need to have the capability of forming genuine opinions (not just the results of programmed sequential logic) and being able to consistently assign qualitative values (feelings) to those opinions. Intelligence isn't pure reasoning. And empathy is not the outcome of pure reason.

Human reasoning is often only partially based on pure logic. Computers reason on the "if this then that else some other prescribed variable" principle. Humans reason on the "if this then maybe that, this or both, else perhaps something completely different arrived at by a very circuitous train of thought" principle. We tend to be more or less "intuitive" in our reasoning. The amount of processing power required to reproduce that using the current computer architectures is mind-boggling - and not available.

Until a different type of computer is developed which works on the same principles as the human brain, I doubt if any computer will ever come within a bull's roar of true self-awareness ... empathy? Huh!
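[The contrast drawn above between machine and human reasoning can be sketched in a few lines of Python. This is a toy illustration only, not anything from the thread; the function names, cues, and thresholds are all invented for the example. The machine's "reasoning" is a fixed cascade of prescribed branches, while even a crude imitation of intuition needs weights over several soft cues, and a person still has to choose those weights.]

```python
# Rigid, fully prescribed branching: the "if this then that else
# some other prescribed variable" style of machine reasoning.
def machine_reasoning(temperature_c):
    if temperature_c > 30:
        return "hot"
    elif temperature_c > 15:
        return "mild"
    else:
        return "cold"

# A crude imitation of "intuitive" judgment: several soft cues
# combined at once. The weights and threshold are hand-picked by
# a human, so the intuition is borrowed, not the machine's own.
def pseudo_intuition(temperature_c, humidity_pct, wind_kmh):
    score = 0.5 * temperature_c - 0.2 * humidity_pct - 0.3 * wind_kmh
    return "pleasant" if score > 5 else "unpleasant"

print(machine_reasoning(20))          # mild
print(pseudo_intuition(30, 10, 5))    # pleasant
```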



The idiot also known as Capfka ...
#25851 04/05/2001 7:55 AM
Joined: Aug 2000
Posts: 3,409
Carpal Tunnel
Offline
Artificial intelligence (AI) has never been about developing a computer-based system which can literally "think", although that's been the popular perception, not helped by the generations of scifi writers who've written about androids (cf Data in Star Trek NG).

Until a different type of computer is developed which works on the same principles as the human brain, I doubt if any computer will ever come within a bull's roar of true self-awareness ... empathy? Huh!


Thanks for that CapK, nice to hear from one who should know. It has also provided me with an excuse to use a signature line I filched from someone in Usenet.



"Your depression will be added to my own" -- Marvin of Borg



#25852 04/05/2001 10:42 AM
Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
Until a different type of computer is developed which works on the same principles as the human brain, I doubt if any computer will ever come within a bull's roar of true self-awareness ... empathy? Huh!

Cap is refuting a point I never made. Empathy is, first of all, human. More later, gotta run.


#25853 04/05/2001 7:10 PM
Joined: Dec 2000
Posts: 2,661
Carpal Tunnel
...Intelligence isn't pure reasoning. Yet the arrogance of standardized testing is!

Very soon, we will say machines think as we do, consciously. The effort to make them do so is wasted time and an endeavor only taken seriously by those who are looking to become completely lazy, or isolated from the divergence that individual beings bring. It won't be because we have any idea what consciousness is. But we will mean something when we say it. And that we will say it--and, finally, forget to say it--has a meaning of its own. Meaning implied by only verbal existence (or lack thereof) is "full of holes" - "The leprechaun rode my blue unicorn" is understandable, but means little (aside from aesthetics), just as experienced posters (ones who post) who don't out of protest really aren't heard! Machines will think as we do when we recognize them doing so. You're starting to sound like a machine. We will not come to realize that machines think through analysis, machines will come to think when we have empathy for them. I'd be more inclined to say that that empathy will come when they have empathy for us (which will never happen). Machines have been cultivating empathy in us for a long time. Maybe in materialists in general, and certainly "consumers" whether they use the machines or not, but leave me out of "us".


#25854 04/05/2001 8:20 PM
Joined: Jun 2000
Posts: 444
addict
Offline
It is possible that machines have already begun to think, and it is possible they have begun not to recognize us.

Sorry, I don't see at all how this (final and therefore important?) sentence relates to the rest of the post.

Nor can I interpret it clearly without knowing whether for you, Inselpeter, 'think' necessarily implies consciousness and what you mean by 'recognise'. Be able to identify as an object? Be able to identify as a living thing? Be able to identify as having valid needs and requiring some sort of respect?

BTW I read in flat mode and the other posts so far haven't helped.



#25855 04/05/2001 9:46 PM
Joined: Mar 2000
Posts: 11,613
Carpal Tunnel
[Out on a limb e] I think the point of insel.'s originating post is about how we use language which is inherently inadequate to ascribe, describe, attribute, etc., things. I had the feeling that anything that meets the requirement could have been used as an example, not just machines or consciousness.


#25856 04/05/2001 10:26 PM
Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
I'm writing an answer to several of your posts (this time I can't just fire a quip).

In my first post, "strategy" is the operative term. The grisly images are part of the docu's strategy, the manipulation of our empathic response. The "AI" question is not central; empathy is. As to AI, the argument that has been attributed to me is not the one I'm attempting--successfully or not--to touch on.
IP



#25857 04/05/2001 10:40 PM
Joined: Nov 2000
Posts: 3,146
Carpal Tunnel
Sorry, IP, I was actually responding to Helen's post more than yours, and perhaps not even that. Probably I should have made it a [rant].

I have suffered for years (ever since I wrote a dissertation on natural language processing for a post-grad diploma) from hearing those around me saying things like "machines will be able to think soon". That's codswallop.

They may well be able to process information in a way which appears to the casual observer to be based on human thought processes, but it won't be real. It'll all be clever programming which mimics the external manifestations of thought, that's all. I'm sure we've all heard of the "Eliza" program and the urban mythology surrounding it. And that's as close as we're likely to get for some time.

Of course, I'm almost always wrong when predicting information processing trends. So the first successful "Deep Thought" computer is probably being commissioned right now!
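[For readers who haven't met it, the Eliza trick mentioned above can be mimicked in a dozen lines. This is a minimal sketch of the general pattern-substitution idea, not Weizenbaum's actual script; the rules are invented for illustration. The "conversation" is nothing but regular-expression matching and template substitution, which is exactly the clever-programming-without-thought the post describes.]

```python
import re

# Each rule is (pattern, response template); \1 echoes back the
# matched fragment. The final catch-all keeps the "conversation"
# going when nothing matches.
RULES = [
    (re.compile(r"i need (.*)", re.I), r"Why do you need \1?"),
    (re.compile(r"i am (.*)", re.I),   r"How long have you been \1?"),
    (re.compile(r".*"),                "Please tell me more."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.match(utterance)
        if match:
            # Substitute captured groups into the template, if any.
            return match.expand(template) if match.groups() else template
    return "Please tell me more."

print(respond("I need a holiday"))   # Why do you need a holiday?
print(respond("I am tired"))         # How long have you been tired?
print(respond("hello"))              # Please tell me more.
```

No state, no model of the speaker, no meaning anywhere: the appearance of attention comes entirely from echoing the user's own words back at them.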



The idiot also known as Capfka ...
#25858 04/06/2001 5:03 AM
Joined: Mar 2000
Posts: 1,027
old hand
Starting from the assumption that the present thread is a representative example of human thought output, I am quite convinced a machine will never match it: by definition, machines are just not sufficiently chaotic.


#25859 04/06/2001 9:47 AM
Joined: Mar 2000
Posts: 1,004
old hand
I don't know if this is off-thread, or if this entire thread is off-topic, or if it all makes sense somewhere, but...

1. To speak of machines as never being able to think is probably meaningless - by almost any definition of the word, we humans are machines, and it is patently obvious that we think.

2. If you speak only of 'artificial' machines, you are still faced with the problem of defining what you mean by artificial. Let's say you claim it is:

a. a deliberate product
b. one that would not have occurred 'in the course of nature'
c. one created by humans

Even with all these in mind, a 'test tube' baby fits the bill.

What happens, IMO, in all this discussion of machines, is that same technophobia that Asimov tried to counter with his Three Laws of Robotics. We do not wish to believe that humans can create 'non living' (whatever that term means) entities that demonstrate consciousness.

I deliberately used the word 'demonstrate' in that last sentence, because not one of us, apart from the personal example, has any idea of anybody else possessing consciousness except by an analysis of that person's behaviour. If you judge me to be conscious because of my behaviour, then you must judge as conscious any entity that shows similar behaviour - you cannot have it both ways and stick to some 'essentialist' idea of humanity - particularly given that we only have the words on our screens as evidence of any of the ayleurs' consciousness (though Jackie may get the dubious pleasure of meeting the android called Shanks shortly!).

The only other argument, then, must hinge upon some notion that it is practically impossible to design and programme a non-human entity with the capacity to demonstrate human-like behaviour. To this, some ripostes:

1. We already have computers with storage and processing capacity rivalling, and in fact beating, that of the human brain.

2. With advances in the theory and practice of parallel processing, connectionism, and modular notions of mind (read Pinker, Dennett et al), it would be a brave person who would bet against the creation, within a generation or so, of a non-human entity with the capacity to do a darn sight more than Eliza.

3. The Godelian argument, as recently advanced by Penrose and others, is deeply flawed (discussion available privately, on another thread, or a different board altogether, if wanted).

That's my take, anyway.

the sunshine ("Campaign for silicon rights") warrior


#25860 04/06/2001 12:02 PM
Joined: Sep 2000
Posts: 4,757
Carpal Tunnel
Offline
I'm in your camp on this one, Shanks. As to what our concepts of 'rights' will mean in that brave new world...

machines are just not sufficiently chaotic
Yes, Werner, but give Bill Gates another stab at it...


#25861 04/06/2001 12:28 PM
Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
If someone cries out in pain, I have no doubt of their experience. It is this phenomenon, which is not formally meaning, that I am calling "empathy"

Excuse me for taking this apart some more: The first sentence implies that this is an unfailing,

Point taken. It is not unfailing. Why and when it might fail is a question of some interest to psychologists, for example, and to those who, as I have suggested, concern themselves with how holocausts come about.

quasi-automatic reaction of you as a human being. But the remainder of your argument suggests that you regret the very fact that empathy is not general, but has to be taught, and can be "manipulated".

Where do I say empathy has to be taught? That it can be manipulated is indisputable, as is its significance in human relations.

A quality which would be inborn/instinctive could not be considered part of ethics, because, as I see it, ethics is about conscious social behavior.

Can you advance anything other than egoism without it?

Now something rather provocative: Is there really a categorical difference between the first, purely emotional reaction on hearing someone cry out in pain on one hand, and witnessing a valuable object (like a brand-new car) going to pieces in a crash, on the other hand?

Obviously, there is. I don't feel empathy for cars, not even Ferraris.

What I want to demonstrate is a warning not to stylize every pinching gut-feeling as empathy.

a) are you assuming I do?

b) you are distinguishing the despair at a lost investment from empathy, for example, for the person pinned behind the wheel. Either we already agree, or you have answered your own question.

IP


#25862 04/06/2001 12:56 PM
Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
IP: Very soon, we will say machines think as we do, consciously.

MK: The effort to make them do so is wasted time and an endeavor only taken seriously by those who are looking to become completely lazy, or isolated from the divergence that individual beings bring.


True or not, this is not on point.

IP: It won't be because we have any idea what consciousness is. But we will mean something when we say it. And that we will say it--and, finally, forget to say it--has a meaning of its own.

MK: Meaning implied by only verbal existence (or lack thereof) is "full of holes" - "The leprechaun rode my blue unicorn" is understandable, but means little (aside from aesthetics)


This is implicit in the statement you seek to refute: "It won't be because we have any idea what consciousness is"

IP: Machines will think as we do when we recognize them doing so.

MK: You're starting to sound like a machine.


I either deny this or take it as a compliment, depending on your meaning.

IP: We will not come to realize that machines think through analysis, machines will come to think when we have empathy for them.

MK: I'd be more inclined to say that that empathy will come when they have empathy for us (which will never happen).


Please see my response to Bridget. I will try to post it today.

IP: Machines have been cultivating empathy in us for a long time.

MK: Maybe in materialists in general, and certainly "consumers" whether they use the machines or not, but leave me out of "us".


Your concerns betray a familiarity with the phenomenon I am portraying with hyperbole. We are in agreement on this: your sentiments and my point coincide exactly.

IP



#25863 04/06/2001 1:27 PM
Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
If you judge me to be conscious because of my behaviour, then you must judge as conscious any entity that shows similar behaviour - you cannot have it both ways and stick to some 'essentialist' idea of humanity

This is the only post that addresses the point. I don't agree with your criteria for defining machine, but I won't take it up with you.

Since it is the only point with real relevance to language itself, I will add that the notion of consciousness is suspect for the very reason you assert observed behavior as the sole criterion for imputing it. As something utterly private, consciousness is not something around which there can be an evolved language. If the concept lies outside the horizon of language, both the term and its assertion are meaningless. This may spur controversy best avoided (I've learned my lesson). Suffice it to say, an analysis of certain forms of speech reveals an inadequacy of the term to its purported meaning:* we think we mean things which cannot be meant. That takes it a step past shanks's point, but without changing direction.

IP

*[I-realize-I've-set-myself-up-for-a-dig emoticon]


#25864 04/06/2001 1:36 PM
Joined: Nov 2000
Posts: 3,439
wow
Carpal Tunnel
Offline
This little tale I am about to recount is nowhere near as erudite as the discussion so far, but I sometimes have more courage than sense.

A science fiction story published in the 1960s goes like this (abridged for time/space).

Comes a time when all the computers in the world are to be linked. The final connection is set to happen in a big ceremony, televised world-wide. After a huge discussion about who would make the physical connection and what the first question would be, it was decided that the techs who did the work would do it. And they did. The first question, put by a committee of world leaders, was: "Is there a God?" The computers whirred and blinked, and then - as the world listened - the answer came: "There is now."
wow


Joined: Mar 2000
Posts: 1,004
old hand
David

Wittgenstein's over-quoted line: "Whereof we cannot speak, thereof we must remain silent" seems appropriate here.

Either one suggests that all language is nothing more than language, and attempting to consistently link any labels to real world referents is futile and meaningless, or one takes the (IMO, 'coherentist') view that language is yet another imperfect tool and we simply need to use it as best we can. In the first scenario, all discussions of consciousness are as meaningful as each other - the point is the discussions, not consciousness.

In the second scenario, we come up against the apparently huge divide between third person and first person apprehensions of mental states. Or, as one might say, coming back to a discussion that we have had somewhere on the board before: what are qualia?

IMO, insofar as qualia are 'real' things, non-human, human-created entities will someday have them.

cheers

the sunshine warrior


Joined: Nov 2000
Posts: 3,439
wow
Carpal Tunnel
Offline
non-human human-created entities may someday think, but I wonder if that will make "them" sentient?
wow




#25867 04/06/2001 2:34 PM
Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
IP: Machines have been cultivating empathy in us for a long time.

MAX: I wonder if you could expand on this for me?


Strange but true: a lot of the first post was whimsy, meant to suggest certain things, but not to work them all out.

Shanks's post is about the equivalence of ascribing consciousness to "machines" and "non-machines." I answered that the term "consciousness" is problematic. If it is, it might pose a problem for those who want to discuss AI as a form of consciousness. Obviously, I don't think it is one, or that it will ever become one. In fact, I don't think there is any such thing as a 'form of consciousness' for it to become.

A more interesting question, then, is to consider the point at which machines might become members of an ethical community. I am suggesting that that would be the point at which we have empathy for them. This exceeds any connection with language, so I'm not going to pursue it here.

The line you're asking about is a quip: once machines have entered into the ethical community, their empathy for us will be no less significant than ours for them. It is our empathy which will have recognized them as members of that community, and once they are members, their recognition of us will be no less important than ours of them.

In the line you ask about, the process is retroactive. The irony is that machines, which cannot yet be considered members of an ethical community, are preparing us to admit them into it. This is farce, meant to point out the way we subordinate ourselves to machines. And to subordinate one's self is a hair's breadth from being subordinated by another. (shanks will call this a category error.)

Some machines that ravage are purchased under such a burden of debt that they can never be shut down, and they operate as though with a purpose of their own. They subordinate our interests to theirs (as it were); in this way they individuate themselves over and against us: they demand to be recognized.

It's an old image. Wagner used it. The sentence is farce, but I think there's truth to it.



#25868 04/06/2001 2:35 PM
Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
Bridget,

I think my reply to Max's post should clear all this up.

IP


Joined: Mar 2001
Posts: 2,379
Pooh-Bah
Offline
Wittgenstein's over-quoted line: "Whereof we cannot speak, thereof we must remain silent" seems appropriate here.

So much so, I even misquoted it in my second (first?) post.

I'm pretty much done with this topic for here and now.

thanks, shanks

IP


#25870 04/08/2001 9:37 AM
Joined: Nov 2000
Posts: 3,146
Carpal Tunnel
Shanks: I don't know if this is off-thread, or if this entire thread is off-topic, or if it all makes sense somewhere, but...

Who cares? The discussion itself is the thing, not where it sits!

1. To speak of machines as never being able to think is probably meaningless - by almost any definition of the word, we humans are machines, and it is patently obvious that we think.

Perhaps. But we are unable to create/recreate the operating system that makes us tick without simply making more humans. Bits and pieces of it, yes, but not the integrated whole. Researchers at places like MIT have been trying to get a grasp on it for years, yet if you read their papers you can see that they're really not getting very far. Machines can be made to reason by following sets of rules (using the word "rule" very, very loosely), but they cannot use referents they don't have random access to, and they will not have had anything like the learning experience that even the least intelligent human has had. Therefore, they will never be able to think in the way that humans do. They may be able to think using a circumscribed set of experiences with which they are programmed. It may be possible to provide them with similar stimulation to that which babies have and "grow them up" gradually, although at an accelerated pace. Even so, IMHO, their thought processes cannot be truly "human" in nature.
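The rule-following reasoning mentioned above can, at its simplest, be caricatured as forward chaining over if-then rules - a toy sketch, with facts and rules invented purely for illustration:

```python
# Toy forward chaining: repeatedly fire any rule whose premises are all
# known facts, until nothing new can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and set(premises) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented rules: (list of premises, conclusion).
rules = [
    (["has_feathers"], "is_bird"),
    (["is_bird", "can_fly"], "nests_in_trees"),
]
derived = forward_chain(["has_feathers", "can_fly"], rules)
print(sorted(derived))  # ['can_fly', 'has_feathers', 'is_bird', 'nests_in_trees']
```

The machine "reasons" only within whatever facts it has been handed - which is exactly the circumscribed-experience limitation the paragraph above is pointing at.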

2. If you speak only of 'artificial' machines, you are still faced with the problem of defining what you mean by artificial. Let's say you claim it is:

a. a deliberate product
b. one that would not have occurred 'in the course of nature'
c. one created by humans

Even with all these in mind, a 'test tube' baby fits the bill.


Your three points above are tautological in the context of my interest in AI. I'm not really interested in a philosophical debate over the term "artificial" and its connotations. A test-tube baby is just artificial tinkering with the beginnings of the act of "natural creation". No one (I hope) claims that test-tube babies are the product of some artificial construct. We don't determine their genetic makeup; that's predetermined. (This could, of course, change now, but it doesn't affect this discussion.) Test-tube babies are as human as you or I (making some gross assumptions about how human you and I are) and are made from the same raw materials - egg and sperm - that we are.

What happens, IMO, in all this discussion of machines, is that same technophobia that Asimov tried to counter with his Three Laws of Robotics. We do not wish to believe that humans can create 'non living' (whatever that term means) entities that demonstrate consciousness.

Technophobia? Not at all. In fact I considered a career in computer science for some time because I find the concept of creating a machine that can reason and have some form of self-awareness rather than merely follow its programming exciting rather than scary. I can enjoy The Matrix without worrying that it may be our future! I believe it can be done. And I still devour all the literature. I just don't see us succeeding anytime soon, for all sorts of reasons.

1. We already have computers with storage and processing capacity rivalling, and in fact beating, that of the human brain.

That depends upon whether or not you believe the guesstimates of the capacity of the human brain. And guesstimates they are. I don't believe them - I think they're all grossly understated, because I don't think we yet have a handle on just how low a level our brains actually "store" information at. And I'm also in the camp that believes that we store not only information, but the processes that operate on that information, in the same memory "locations". Further, simply having the equivalent resources doesn't solve the basic technical problems.

2. With advances in the theory and practice of parallel processing, connectionism, and modular notions of mind (read Pinker, Dennett et al), it would be a brave person who would bet against the creation, within a generation or so, of a non-human entity with the capacity to do a darn sight more than Eliza.

The reference to Eliza was a side-swipe at Turing's definition, not an argument for or against anything. Connectionism is definitely a mode which no one can argue with as an approach to modelling the human thought processes, but (without going into a load of detail which even I find boring these days) it is only part of the answer, since current connectionist models, in the end, depend upon probability ... do human thought processes? The other suggestions are also likely to be fruitful, but, again, it's being able to put it all together which I'm betting against.
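To make the "depend upon probability" point concrete: a typical connectionist unit squashes a weighted sum of its inputs through a logistic function, so its output is naturally read as a graded probability rather than a hard yes/no symbol. A sketch, with made-up weights chosen only for illustration:

```python
import math

def logistic_unit(inputs, weights, bias):
    """One connectionist 'neuron': a weighted sum squashed into (0, 1).

    The result is conventionally read as a probability or confidence,
    which is the probabilistic character referred to above.
    """
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Invented weights, for illustration only.
p = logistic_unit([1.0, 0.5], weights=[2.0, -1.0], bias=-0.25)
print(f"{p:.3f}")  # a graded confidence, not a discrete answer
```

Whether human thought bottoms out in anything like this graded, probabilistic machinery is precisely the open question in the paragraph above.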

3. The Godelian argument, as recently advanced by Penrose and others, is deeply flawed (discussion available privately, on another thread, or a different board altogether, if wanted).

No, no argument. Godel's theorems are getting pretty long in the tooth anyway and attacking them is a bit like shooting fish in a barrel for our philosophical brethren. The development of machine-based "consciousness" and "self-awareness" are what I'm interested in, and are what I don't believe will occur in the foreseeable future. If a computer could demonstrate objectively that it is genuinely thinking about "What am I and who am I and what is my place in the scheme of things?" I would concede defeat. But I have no idea how you might prove or disprove whether it has been achieved or not.

And as a postscript to this (last) post on this topic from me, I would reiterate - this is only my opinion of the state of things based on the information I get. I could be completely wrong, and someone may have cracked it.





The idiot also known as Capfka ...
#25872 04/11/2001 5:06 AM
Joined: Aug 2000
Posts: 3,409
Carpal Tunnel
Offline
Of course, I'm almost always wrong when predicting information processing trends. So the first successful "Deep Thought" computer is probably being commissioned right now!

I stumbled across this today, and thought that it was both interesting and relevant to this thread.
http://www.newsobserver.com/monday/business/Story/419010p-414835c.html


#25873 04/11/2001 8:51 AM
Joined: Nov 2000
Posts: 3,146
Carpal Tunnel
Self-programming gate arrays are certainly a promising field of study in adaptive computing. The prototypes appeared around ten years ago, and were hailed by a few people as the answer to all AI's problems. It's certain that there are all sorts of interesting problems they can be set to. Put enough of them together and provide them with the appropriate instruction set, and, who knows? - consciousness, or at least self-awareness, may be possible.
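In software terms, the adaptive idea behind self-programming gate arrays can be caricatured as hill-climbing a configuration (here a four-entry truth table) toward a target behaviour. A toy sketch only - nothing like a real gate-array toolchain:

```python
import random

random.seed(0)  # reproducible run

# Target behaviour for our pretend "circuit": the XOR truth table.
TARGET = [a ^ b for a in (0, 1) for b in (0, 1)]  # [0, 1, 1, 0]

def fitness(table):
    """Count how many truth-table entries match the target behaviour."""
    return sum(t == g for t, g in zip(table, TARGET))

# Start from a random configuration and flip single entries,
# keeping any change that matches the target at least as well.
table = [random.randint(0, 1) for _ in range(4)]
while fitness(table) < 4:
    i = random.randrange(4)
    candidate = table[:]
    candidate[i] ^= 1
    if fitness(candidate) >= fitness(table):
        table = candidate

print(table)  # [0, 1, 1, 0] - the "circuit" has adapted into XOR
```

The sketch shows only the feedback-driven reconfiguration idea; real adaptive hardware searches a vastly larger configuration space, which is where the interesting problems live.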

Thanks for the link, Max.



The idiot also known as Capfka ...
#25874 04/11/2001 8:56 AM
Joined: Aug 2000
Posts: 3,409
Carpal Tunnel
Carpal Tunnel
Offline
Joined: Aug 2000
Posts: 3,409
Thanks for the link, Max.

You're welcome, Dave. Dave, don't do that. I can't let you do that, Dave.

