I'm doing research for my English class research paper on public opinion on:

Started by spacemarine658, July 28, 2014, 11:38:09 PM


ApexPredator

To what degree would you consider a computer artificially intelligent? Do you consider being self-aware sufficient, or does it need a human-like capacity for thought/problem solving?

spacemarine658

My paper will go over that, but I believe that self-awareness is a huge part and the most important part. Once it becomes self-aware, everything else is just learning.

StorymasterQ

See, to me, self-aware and intelligent are different. It's the same difference as sentience vs sapience. You can be either or both, but you don't have to be either to be the other.

From Wikipedia, Sentience is the ability to feel, perceive, or to experience subjectivity. To me, this is self-awareness.
Also from Wikipedia, Sapience is the ability of an organism or entity to act with judgment. To me, this is intelligence.

To tie in with your questionnaire, I'd say that:
- If an entity is sapient, then it will need laws to impart limits to its judgment. Asimov's Three Laws are a good place to start.
- If an entity is sentient, then it should be given rights, so as to impart limits on other entities' actions towards it. The Geneva Convention somehow springs to mind.
- If an entity is both, then both laws and rights are required for it.

Less seriously, it is certain that we have the capacity to make entities that can house Artificial Intelligence, but I doubt that anything can surpass the human mind in having Genuine Stupidity.
I like how this game can result in quotes that would be quite unnerving when said in public, out of context. - Myself

The dubious quotes list is now public. See it here

Mondkalb

I don't want to sound harsh, but your research paper isn't very international, and some closer definitions would have been nice, because it isn't clear what exactly is meant by "artificial intelligence" and "self-awareness", or whether you consider both to be the same. But for your age (if I've guessed it approximately correctly) it's well done! :)

Quote from: StorymasterQ: to me, self-aware and intelligent are different. (...) You can be either or both, but you don't have to be either to be the other.
This is really interesting! Although my definitions of self-awareness and intelligence are nearly (or effectively) the same as yours, I come to another conclusion. I think a certain level of intelligence is needed to become self-aware.

Thinking of a typical experiment: a bonobo, a dolphin, a cat, a dog, a human baby, and a human child each get a coloured mark on the forehead, placed so that they can only see the mark when looking into a mirror.

The bonobo and the dolphin (two of the most intelligent animals) will try to remove the mark after looking into the mirror, and so will the human child.

The cat, the dog, and the human baby will NOT try to remove the mark. They don't realize that the "thing in the mirror" is a reflection of themselves. The baby will learn it some day, when it has developed the needed level of intelligence.

This leads me to the conclusion that intelligence and self-awareness are, on one hand, not the same by definition, but on the other hand cannot occur independently of each other.

But this has (in my humble opinion) nothing to do with artificial intelligence at all! As "artificial" implies, it is not real intelligence. It's just based on algorithms developed by some real intelligence to make it seem like an intelligent being, without being aware of itself. Referring to the "forehead-mark experiment": an artificial intelligence would remove the mark, but only because it's constructed to do so, to create the illusion of being intelligent.

So, referring to the research paper: I think when meshing "intelligence" and "self-awareness", the question should be whether computers can develop real intelligence, not the artificial kind.

Some facts to trigger inspiration:

1. The human brain is the most complex structure we know of in the whole universe. If you chained all the nerves of a single average human brain in a straight line, you'd get a string reaching from the Earth to the Moon and halfway back again!

2. There is no exact place inside the brain where memories are stored. Memories are somehow spread over the whole brain.

3. Humans are the only known life form, so far, able to (at least partially) understand their own brain.

(Whew! Guess I exaggerated a bit... But I can't help it, this is such an interesting theme! ;D )

spacemarine658

Thanks for the opinions and advice, guys. I'm 19 and a freshman, and I'm sorry if my paper isn't as broad as you'd like; it's my first real research paper, and it's on opinions on A.I./sentience/sapience. I apologize for not being more clear.

StorymasterQ

Yeah, I guess we are probably a bit too pedantic about the subject :D Is it really for English class? Not even Science class? Then perhaps you don't really need to be correct, but instead convincingly argumentative. Posit facts, give arguments for both sides, and offer either a decisive conclusion or an open question. I find both work in an English class paper.

ApexPredator

Space, I am in my last month of college (finally) and will drop a few nuggets that I learned on the research front. There is nothing wrong with having a targeted research paper; it is actually preferable in most cases. My main issue with your OP is in the survey questions themselves. If you are going to do a survey on AI and sentience (or anything, really), then you need to properly define the terms for those who take the survey. You should never assume that your reader knows what you are talking about in a research paper, because in most cases they have not done the research, and that is why they are reading your paper. Also, the survey questions were worded in a way that would prompt a complex answer, but we were supplied with only yes/no/idk options.
For example, your second-to-last question asks "if a computer did become self-aware would it be guaranteed rights…". The questions that came up when I read that are: what do you consider self-aware, what kind of rights, the same laws as whom, and who is the guaranteeing authority for the rights? If this were a question you gave to me in person (say we were both students in the same class), I would probably assume a meaning similar to what you were thinking when you wrote it, but it is still a good idea to never have a subject assume anything. On a side note, I also found it interesting that you required race but not religion or political affiliation; any reason for this?