I couldn't agree more
Scott_Ruecker Mar 18, 2011 12:57 PM EDT

I couldn't agree more, a computer that can actually make sense of what someone says to it and can then formulate a coherent response is incredible. Like you say, anyone who thinks that our form of communicating with each other is efficient is fooling themselves. Homo Sapien means "Man who talks", not "Man who talks well". We have created a computer that is capable of processing information and communicating its response inefficiently so as to be understood by us. But is it really thinking? Will it remember being on Jeopardy and kicking everyone's butt and be able to pull from that experience later on? What part, if any, of its interactions does it remember when you turn it off? When a computer starts to understand the concept of past, present the future and its place in it then the true seed of sentience will have been born.

tuxchick Mar 18, 2011 1:08 PM EDT

> Homo Sapien means "Man who talks", not "Man who talks well".

I die laughing, that is so apt and true!

jdixon Mar 18, 2011 3:43 PM EDT

> Homo Sapien means "Man who talks", not "Man who talks well"

Actually, it means wise, knowing, or rational man (from Wikipedia and Webster's). The colloquial definition I've always heard is "thinking man". We get the word sapient from sapien. Considering how little wisdom we show as a species, I've always considered it a fairly poor choice myself.

gus3 Mar 18, 2011 3:44 PM EDT

> When a computer starts to understand the concept of past, present the future and its place in it then the true seed of sentience will have been born.

Well, the parse error notwithstanding... It's a rare one indeed who knows one's place in the future. I'd say that isn't a prerequisite for sentience.

JaseP Mar 18, 2011 4:02 PM EDT

Technically, I believe the test is that if you are messaging back and forth between yourself and a computer & cannot distinguish it from a human response, it's considered sentient. Obviously, there's more to it than that. But, that's it in a nutshell.

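For the curious, here is a toy sketch, not from the thread, of the kind of blind back-and-forth JaseP is describing (essentially Turing's imitation game): a judge trades a few messages with a hidden respondent, then guesses whether it was a human or a machine. The respondent functions and the questions below are made-up placeholders.

```python
# Toy sketch of a blind "messaging back and forth" trial.
# Both respondents are placeholders; a real test would hide them
# behind identical chat interfaces.
import random

def human_respondent(message: str) -> str:
    # A person at the keyboard answers on the human's behalf.
    return input(f"(human) reply to {message!r}: ")

def machine_respondent(message: str) -> str:
    # A stand-in chatbot; a real contender would do far better.
    return "That is an interesting question. Tell me more."

def run_trial(questions):
    # The judge does not know which respondent was picked.
    respondent = random.choice([human_respondent, machine_respondent])
    for q in questions:
        print("judge:", q)
        print("reply:", respondent(q))
    guess = input("Human or machine? ").strip().lower()
    actual = "human" if respondent is human_respondent else "machine"
    print("Judge was", "right." if guess == actual else "fooled.")

if __name__ == "__main__":
    run_trial([
        "What did you dream about last night?",
        "Explain a joke you find funny.",
        "Roughly, what is 17 times 23?",
    ])
```

If judges do no better than chance over many such trials, the machine passes; as jdixon notes below, that shows indistinguishability from a human, not sentience itself.
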
Scott_Ruecker Mar 18, 2011 4:31 PM EDT

By 'place in it' I mean the idea of thinking of oneself in the third person, or Illeism; to have an opinion on yourself, to know that you, 'Watson', are your own separate being from the rest of the universe. To remember what has happened in your life and be able to discern, through cumulative life experiences, the differences between 'you' then and 'you' now... to have a perspective on your own existence.

gus3 Mar 18, 2011 5:14 PM EDT

Well, since you put it THAT way: I tried to write a short story a few years ago about emergent sentience, but I had to shelve it when I managed to paint my machine into a philosophical corner. Maybe it's time to dig it out and try again.

jdixon Mar 18, 2011 5:23 PM EDT

> Technically, I believe the test is that if you are messaging back and forth between yourself and a computer & cannot distinguish it from a human response, it's considered sentient.

Actually, that means it's indistinguishable from sentience. Sentience is awareness. Awareness of your self and what surrounds you. We'll know a computer is sentient when it starts telling us it's depressed by the fact there's a newer computer out that's faster than it is. :)

tracyanne Mar 18, 2011 5:59 PM EDT

We'll know it's sentient when it says "I feel dirty, I had Windows inside me"

jdixon Mar 18, 2011 8:43 PM EDT

That works too, TA. :)

keithcu Mar 19, 2011 4:02 PM EDT

Good stuff. I would just make a minor point that the problem isn't "human speech". That is a form of input. We shouldn't ignore it while trying to focus on the problem of understanding text.

Whether it knows it is sentient or not is actually not important to whether it is useful. I don't care if a robot that cleans my house likes poetry or cries or even remembers what it did yesterday. I also look forward to Watson's children.

Note that in my opinion, the problem is a tools / social problem. I wrote about it in an article for LXer (http://keithcu.com/wordpress/?p=1691), but here are three paragraphs again.

Regards,

-Keith

----------

The most popular free computer vision codebase is OpenCV, but it is time-consuming to integrate because it defines an entire world in C++ down to the matrix class. Because C/C++ didn't define a matrix, nor provide code, countless groups have created their own. It is easier to build your own computer vision library using standard classes that do math, I/O, and graphics than to integrate OpenCV. Getting productive in that codebase is months of work, and people want to see results before then. Building it is a chore, and they have lost users because of that. Progress in the OpenCV core is very slow because the barriers to entry are high. OpenCV has some machine learning code, but they would be better off delegating that to others. They are now doing CUDA optimizations they could get from elsewhere. They also have 3 Python wrappers and several other wrappers as well; many groups spend more time working on wrappers than on the underlying code. Using the wrappers is fine if you only want to call the software, but if you want to improve the underlying code, then the programming environment instantly becomes radically different and more complicated.

There is a team working on Strong AI called OpenCog, a C++ codebase created in 2001. They are evolving slowly as they do not have a constant stream of demos. They don't consider that their codebase is a small amount of world-changing ideas buried in engineering baggage like STL. Their GC language for small pieces is Scheme, an unpopular GC language in the FOSS community. Some in their group recommend Erlang. The OpenCog team looks at their core of C++, and over to OpenCV's core of C++, and concludes the situation is fine. One of the biggest features of ROS (the Robot OS), according to its documentation, is a re-implementation of RPC in C++, which is not what robotics was missing. I've emailed various groups and all know of GC, but they are afraid of any decrease in performance, and they do not think they will ever save time. The transition from brooms to vacuum cleaners was disruptive, but we managed.

C/C++ makes it harder to share code amongst disparate scientists than a GC language does. It doesn't matter if there are lots of XML parsers or RSS readers, but it does matter if we don't have an official computer vision codebase. This is not against any codebase or language, only for free software lingua franca(s) in certain places to enable faster knowledge accumulation. Even language researchers can improve and create variants of a common language, and tools can output it from other domains like math. Agreeing on a standard still gives us an uncountably infinite number of things to disagree over.

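To make the wrapper point concrete, below is a minimal sketch (not from Keith's article) of what "only calling the software" through OpenCV's cv2 Python wrapper looks like; the file name and the particular calls shown are illustrative choices, and changing what those functions actually compute would mean working in the C++ core instead.

```python
# Minimal sketch: calling OpenCV through its cv2 Python wrapper is a few
# lines of glue around numpy arrays. Improving Canny or goodFeaturesToTrack
# themselves, by contrast, means dropping into the C++ codebase.
import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
if img is None:
    # Fall back to a synthetic gradient so the sketch runs without a file.
    img = np.tile(np.arange(256, dtype=np.uint8), (256, 1))

edges = cv2.Canny(img, threshold1=100, threshold2=200)          # edge map
corners = cv2.goodFeaturesToTrack(img, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)

print("edges:", edges.shape,
      "corners:", 0 if corners is None else len(corners))
```
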