Monday, July 24, 2006

Can machines think?

Here's a scene: you are in a room asking questions (U) and getting replies (R). The conversation goes like this:
U: What shall we talk about?
R: I can talk about anything. Ask me anything.

U: If I have a right-angled triangle and two of its sides are 3 and 4, what is the third one?
R: Oh, we are into the Pythagoras theorem. The answer is 5.

U: Good! You know some high-school geometry. Can you tell me what the square root of 145 is?
R: Slightly greater than 12.

U: That's really smart. What is your view of the recent war in Lebanon?
R: I think it is highly unfair. However, with so many different kinds of people living closely together, conflicting affiliations can keep tensions running high.

U: Wow! What a mind-boggling statement. Is it a good time to invest in the stock market?
R: The market is the world's biggest casino. If you are ready for risk, go ahead and invest.

U: Have you seen "Alaipayudhe Kanna"?
R: Sorry, I am not an encyclopedia. Being in America, I don't know what "Alaipayudhe" means!


Well, the question is: can you tell whether the replies are from a computer or from a human being? Based on your comments, we will proceed towards the actual issue: can computers think?
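For concreteness, replies of this flavour need nothing smarter than keyword matching against a table of canned responses. Here is a rough, purely illustrative Python sketch; the patterns and replies are invented for this example and are not how the conversation above was actually produced.

    import math
    import re

    # Purely illustrative: a tiny keyword-matching responder, in the spirit of ELIZA.
    # Each rule pairs a regular expression with a function that builds the reply.
    RULES = [
        (re.compile(r"right.?angled triangle.*\b3\b.*\b4\b", re.I),
         lambda m: "Oh, we are into the Pythagoras theorem. The answer is %g." % math.hypot(3, 4)),
        (re.compile(r"square root of (\d+)", re.I),
         lambda m: "Slightly greater than %d." % int(math.sqrt(int(m.group(1))))),
        (re.compile(r"stock market", re.I),
         lambda m: "The market is the world's biggest casino. If you are ready for risk, go ahead."),
        (re.compile(r"talk about", re.I),
         lambda m: "I can talk about anything. Ask me anything."),
    ]

    def reply(question):
        # Return the first canned reply whose pattern matches; otherwise a stock excuse.
        for pattern, respond in RULES:
            match = pattern.search(question)
            if match:
                return respond(match)
        return "Sorry, I am not an encyclopedia."

    print(reply("Can you tell me what the square root of 145 is?"))  # Slightly greater than 12.

A real program would need far richer rules, but the point stands: the visible behaviour alone does not settle who, or what, is answering.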
prof

8 comments:

BrainWaves said...

It is a technical possibility, if it does not exist already. So it could certainly be a computer.

I don't know whether this is meant to lead us into a philosophical or a technical discussion, though.

Suresh Sankaralingam said...

I would pose the question to the other side of the room to find out... :)

The word "thinking" is a relatively strong term. Computers can certainly be made to look like they think. But, when things are laid out in terms of how computer did what it did, it will do something deterministic based on a sequence of rules. I think human thinking is much more complicated than that. That said, often times, I am not sure if all humans can think?

Survivor said...

Hmmm... time to think about computer thinking. I think (being a human) that computers are just machines that follow rules and can go bizarre at times... but thinking like a human brain? I doubt it.

Manohar said...

As far as the answers here are concerned, take "Slightly greater than 12" as an example. Although most humans would answer that way, a computer can also be programmed to answer in approximations.
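For instance, a few lines are enough to turn an exact value into that kind of approximate, humanlike phrase. A rough sketch, with wording rules invented purely for illustration:

    import math

    def approximate_phrase(x):
        # Describe a number the way a person might, instead of printing every digit.
        nearest = round(x)
        if math.isclose(x, nearest):
            return "Exactly %d." % nearest
        if x > nearest:
            return "Slightly greater than %d." % nearest
        return "Slightly less than %d." % nearest

    print(approximate_phrase(math.sqrt(145)))  # Slightly greater than 12.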

So I would say that, with this set of questions and answers, there is not enough evidence to decide yet.

On an interesting note, the book Metamagical Themas by Douglas Hofstadter has a lot of discussion along these lines.

Survivor said...

For the question on "Alaipayudhe Kanna", the computer could be programmed to answer in the negative on a random basis, even though the reply does sound human. If we keep asking a few more questions, we can find out.
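Something as simple as picking a canned excuse at random would do. A tiny sketch, with phrases made up for illustration:

    import random

    # Canned excuses for questions the program has no rule for (illustrative only).
    EXCUSES = [
        "Sorry, I am not an encyclopedia.",
        "I have not come across that one.",
        "No idea. Being in America, I miss a lot of these.",
    ]

    def fallback_reply():
        # Pick one negative reply at random so repeated misses look less mechanical.
        return random.choice(EXCUSES)

    print(fallback_reply())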

bumblebee said...

Based on your line of questioning, I am inclined to say the replies are from a machine.

Can they "think" (think - Decide by reasoning) - Sure, computers can logically determine something based on the rules fed to it(Programmed into it). The better they are trained, the better they can "think". A big benefit they have is they can think objectively without emotional interruption.

Isn't that the same way we figure out the hypotenuse, based on what we were taught earlier? Surely we did not invent the Pythagoras theorem ourselves.

Suresh Sankaralingam said...

I do not completely agree that producing convincing responses to arbitrary stimuli can be categorized as thinking. For example, an arbitrary response of "No" or "Yes" could probably pass as a convincing response to most questions, but that doesn't signify any thinking.

Writing a program that creates such responses is in some way tied to how the programmer felt about the question. For example, if the programmer were a staunch American, I don't think the answer to the Lebanon question would be the way it is... Couldn't one say that the computer is more like a "Mini-Me" (a replica of someone's thinking at a coarse level of granularity) than something that actually thinks on its own? Does(n't) that matter?

Manohar said...

@prof:
Even if I agree with your answer, the point remains that the snippet of conversation above is not enough to conclude either way. So if the set of arbitrary stimuli is large enough (and wide enough), the responses will expose the limitations of the programming.