
View Full Version : "AI is a dream we shouldn't be having"



Teallaura
06-05-2014, 10:32 AM
Robotics expert Noel Sharkey used to be a believer in artificial intelligence, but now thinks AI is a dangerous myth that could lead to a dystopian future of unintelligent, unfeeling robot carers and soldiers. Nic Fleming finds out

Source (http://www.computerweekly.com/feature/AI-is-a-dream-we-shouldnt-be-having)



I found this really interesting.

Carrikature
06-05-2014, 10:53 AM
I can't say I'm surprised. Most of the work on AI, even assuming a computational theory of mind, is currently too far behind the complexity needed to achieve sentience. The pursuit seems to be simulation, not emulation.

As far as AI wanting to take over the world goes, those ideas have a lot of elements that Sharkey seems not to be aware of. One is an AI seeing humans as a threat and seeking to eliminate that threat. Another is an AI put in place to protect us implementing protections we rail against.

I think there's a more important question for humans to think about when creating robots and AI to replace us. What will we be doing with our time when robots are performing the work?

Jedidiah
06-05-2014, 11:38 AM
The goal is definitely simulation. There is no actual explanation for sentience in existence.

KingsGambit
06-05-2014, 11:58 AM
I think there's a more important question for humans to think about when creating robots and AI to replace us. What will we be doing with our time when robots are performing the work?

This is a sobering thought. And yet... at least since the reforms at the beginning of the Industrial Revolution, it seems that technology doesn't always reduce the amount of work we do; it simply justifies us having even more on our plate now that we have the capacity to handle it.

Carrikature
06-05-2014, 12:04 PM
There is no actual explanation for sentience in existence.

That's not really accurate.

Sparko
06-05-2014, 12:10 PM
http://dilbert.com/dyn/str_strip/000000000/00000000/0000000/200000/20000/2000/700/222717/222717.strip.gif

Carrikature
06-05-2014, 12:23 PM
This is a sobering thought. And yet... at least since the reforms at the beginning of the Industrial Revolution, it seems that technology doesn't always reduce the amount of work we do; it simply justifies us having even more on our plate now that we have the capacity to handle it.

There are different kinds of technology, in my opinion, but it depends on how you look at it. I can send a mass email near instantly instead of hand-writing a dozen different letters and sending them to be carried on horseback or ship and waiting weeks or months for a response. Consider what used to be required to produce engineered drawings: hand-drawn with stencils on special paper that didn't let you erase well if at all. I would call that a massive reduction in the amount of work that I do.

You're right, though, that we've always found more to do. I'm not sure how well that holds when/if we develop the ability to do complete tasks with either no input or just programming. Imagine an automated, self-diagnosing entity that communicates with an automated overseer for repairs and deployment based on measurements of various conditions. I suspect there will always be people required, but the quantity needed drops significantly.

Teallaura
06-05-2014, 12:25 PM
http://dilbert.com/dyn/str_strip/000000000/00000000/0000000/200000/20000/2000/700/222717/222717.strip.gif
:hehe: I thought of that one, too...

Sparko
06-05-2014, 12:29 PM
There are different kinds of technology, in my opinion, but it depends on how you look at it. I can send a mass email near instantly instead of hand-writing a dozen different letters and sending them to be carried on horseback or ship and waiting weeks or months for a response. Consider what used to be required to produce engineered drawings: hand-drawn with stencils on special paper that didn't let you erase well if at all. I would call that a massive reduction in the amount of work that I do.

You're right, though, that we've always found more to do. I'm not sure how well that holds when/if we develop the ability to do complete tasks with either no input or just programming. Imagine an automated, self-diagnosing entity that communicates with an automated overseer for repairs and deployment based on measurements of various conditions. I suspect there will always be people required, but the quantity needed drops significantly.

I think you are confusing automation with artificial intelligence.

The OP article is saying that AI is really good at faking intelligence but when it comes down to it, it is just a brute force method of digging through an expert system database to come up with answers that sound like it is thinking. A parlor trick basically. We are not any closer to a true sentient artificial mind now than we were 100 years ago. Just better at faking it.
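
To put a toy version of that "parlor trick" on the table (a made-up Python sketch; the rules are invented, not any real expert system):

def diagnose(symptoms):
    # Brute force through hand-written expert rules, top to bottom.
    # No thinking here, just lookup against canned knowledge.
    if "fever" in symptoms and "cough" in symptoms:
        return "Likely flu."
    if "sneezing" in symptoms:
        return "Likely a cold."
    return "No rule matched."

print(diagnose({"fever", "cough"}))  # -> Likely flu.

It sounds like a diagnosis, but it's the same database-digging dressed up as an answer.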

Teallaura
06-05-2014, 12:29 PM
There are different kinds of technology, in my opinion, but it depends on how you look at it. I can send a mass email near instantly instead of hand-writing a dozen different letters and sending them to be carried on horseback or ship and waiting weeks or months for a response. Consider what used to be required to produce engineered drawings: hand-drawn with stencils on special paper that didn't let you erase well if at all. I would call that a massive reduction in the amount of work that I do.

You're right, though, that we've always found more to do. I'm not sure how well that holds when/if we develop the ability to do complete tasks with either no input or just programming. Imagine an automated, self-diagnosing entity that communicates with an automated overseer for repairs and deployment based on measurements of various conditions. I suspect there will always be people required, but the quantity needed drops significantly.


Stencils could be corrected to a degree (I still like the smell of correction fluid. Reminds me of my Mom...) - but there was a reason draftsmen were well paid. Errors could easily destroy days of work on blueprints.

Sparko
06-05-2014, 12:36 PM
Stencils could be corrected to a degree (I still like the smell of correction fluid. Reminds me of my Mom...) - but there was a reason draftsmen were well paid. Errors could easily destroy days of work on blueprints.

I started out as an electrical draftsman, using a drafting machine (a square and ruler mounted to a drafting table) and templates.

In fact, my table looked almost exactly like this:
http://www.mikanet.com/museum/images/drafting_table.jpg

We drew on vellum which was fairly easy to erase mistakes on. When we were done, we copied the drawings onto blueprints (a type of copy) to make them permanent and create working drawings.

and no I wasn't well paid.

I even helped to bring in and set up Autocad when it first became available. It saved some time and allowed us to store pieces to be reused, and had built-in symbols instead of having to use templates.

Teallaura
06-05-2014, 12:39 PM
I started out as an electrical draftsman, using a drafting machine (a square and ruler mounted to a drafting table) and templates.

In fact, my table looked almost exactly like this:
http://www.mikanet.com/museum/images/drafting_table.jpg

We drew on vellum which was fairly easy to erase mistakes on. When we were done, we copied the drawings onto blueprints (a type of copy) to make them permanent and create working drawings.

and no I wasn't well paid.

I even helped to bring in and set up Autocad when it first became available. It saved some time and allowed us to store pieces to be reused, and had built-in symbols instead of having to use templates.


We had a mimeograph when I was a girl (my parents owned a kindergarten for awhile). I seem to recall draftsmen who dealt with stencils being well - or at least better - paid but I could be mistaken. I know stencils could be corrected but the corrections didn't really look good and didn't print nearly as well as non-corrected areas.

:shrug:

Carrikature
06-05-2014, 12:40 PM
I think you are confusing automation with artificial intelligence.

You would be wrong, then. I've spent enough time on the subject not to make that mistake. :smile:



The OP article is saying that AI is really good at faking intelligence but when it comes down to it, it is just a brute force method of digging through an expert system database to come up with answers that sound like it is thinking. A parlor trick basically. We are not any closer to a true sentient artificial mind now than we were 100 years ago. Just better at faking it.

I understood what the article is saying. I also know that it's an incomplete picture of what really is taking place these days. I've seen robots that are actively learning. Yes, we have really strong systems that are just database-lookup programs. In truth, part of intelligence is database lookup. It's more than that, of course, but you don't have inference and pattern recognition without remembrance of previous encounters.

Carrikature
06-05-2014, 12:43 PM
Stencils could be corrected to a degree (I still like the smell of correction fluid. Reminds me of my Mom...) - but there was a reason draftsmen were well paid. Errors could easily destroy days of work on blueprints.

:yes:

I've heard lots of horror stories about thin paper ripping with too much pressure.

Teallaura
06-05-2014, 12:45 PM
:yes:

I've heard lots of horror stories about thin paper ripping with too much pressure.


Oh yeah - and 'too much' didn't amount to much at all. Then there were those horrible moments when the stencil caught on something and got ripped to shreds in the mimeograph...

:candle:

Carrikature
06-05-2014, 12:50 PM
Oh yeah - and 'too much' didn't amount to much at all. Then there were those horrible moments when the stencil caught on something and got ripped to shreds in the mimeograph...

:candle:

I luckily only have to hear the stories and never had to do it myself. :smug:

Jorge
06-05-2014, 01:59 PM
I found this really interesting.

Back in the 1970's, while in an Air Force Research lab, I was involved with Pattern Recognition (PR) and AI. Back then it was believed that "AI was not too far away - waiting on computers to advance some more." Almost 40 years later the "not too far away" has moved about 100 light years away.

For a short while - less than a year - I kind of thought along the same lines (AI will "soon" be real). That was ignorance on my part. It was while working on theoretical PR of handwriting and face recognition that I realized that the problem was many orders of magnitude beyond what I had thought. Shortly after that I adopted the position that I carry to this day: IT AIN'T GONNA HAPPEN - not ever!

We can and will get progressively closer by way of Expert Systems combined with super-super computers but true AI is impossible. Why? For a similar reason why a natural origin of life is impossible - there is more to life (and intelligence) than bringing together the right chemical elements (or electronic components).

Of course, I am well aware that the Materialists believe otherwise (it is part of their belief system) and that's why they'll continue reaching for their Holy Grail. I will bet the farm ten times over that true AI will never become a reality (but a close facsimile will).

Jorge

Truthseeker
06-05-2014, 02:07 PM
A light year is a unit of distance measurement, not time. Some AI systems would not make that mistake. Superior in at least one way to Jorge.

Teallaura
06-05-2014, 03:01 PM
Using 'light year' as a metaphor for 'extremely far from goal' is perfectly acceptable in English vernacular.

Jorge
06-05-2014, 03:33 PM
A light year is a unit of distance measurement, not time. Some AI systems would not make that mistake. Superior in at least one way to Jorge.

Are you on crack? :yes:

You actually think I don't know what a light year is?
Here's a hint: I completed 6 years worth of undergraduate and graduate study in Physics.
I MEANT 100 light years in the sense that "it is very, very far away - unattainably far away".

You may now return to your 'pipe'. :lol:

Jorge

Jorge
06-05-2014, 03:36 PM
Using 'light year' as a metaphor for 'extremely far from goal' is perfectly acceptable in English vernacular.

I wrote my previous post before reading the above - thanks.
These people are so rabidly fanatical in their anti-YEC (or anti-Jorge)
stance that even the simplest things must be explained. To be pitied! :no:

Jorge

Jedidiah
06-05-2014, 04:40 PM
That's not really accurate.

There are suppositions.

Jedidiah
06-05-2014, 04:43 PM
I've seen robots that are actively learning. Yes, we have really strong systems that are just database-lookup programs. In truth, part of intelligence is database lookup. It's more than that, of course, but you don't have inference and pattern recognition without remembrance of previous encounters.

Learning and memory are not sentience. You can lose the ability to remember and still be sentient.

Truthseeker
06-05-2014, 04:58 PM
Sorry, Jorge. I tend to think of possible future scientific advances (or evolution if you prefer) in terms of time away, not distance away. Makes more sense to me anyway. As for being anti-YEC, while I lean to the OE view, I still think it's possible that Earth, not necessarily the universe itself, is young in some ways. Who's the one who can say that it didn't happen the way God said in Genesis? We need more details than there are in it.

Epoetker
06-05-2014, 08:30 PM
No one who works daily with today's computers and data centers can truly believe in the Singularity.

To be fair, both the hardware and the software are just not structured to do anything resembling human-level decision-making. Some crazy guys (http://www.loper-os.org/?p=401) say that LISP machines (IIRC the only ones that did do machine learning properly and efficiently) may be the ones to look for, but programming in LISP is difficult and requires an understanding of how the underlying system works, which is way too hard to teach in a weeklong seminar, let alone a TED talk. :wink:

Let it be known now: the worshipers of the Singularity will grow in inverse proportion to the actual ability of the underlying machines to bring it about.

Jorge
06-06-2014, 03:49 AM
Sorry, Jorge. I tend to think of possible future scientific advances (or evolution if you prefer) in terms of time away, not distance away. Makes more sense to me anyway. As for being anti-YEC, while I lean to the OE view, I still think it's possible that Earth, not necessarily the universe itself, is young in some ways. Who's the one who can say that it didn't happen the way God said in Genesis? We need more details than there are in it.

No problem ... I'll apologize for a bit of over-reaction. Do understand that certain folk here (they know who) have me on the 'edge' at all times and so sometimes, like nitroglycerin, the minutest 'bump' will set me off. :blush:

As for your stance regarding YEC v. OE/OU, keep in mind that the major conflict is theological, not Naturalistic/empirical. Stated another way, if OE/OU caused no theological conflicts in Orthodox Christianity, there probably wouldn't be a debate (more like a war!) over that subject. Most people aren't aware of that. Anyway, just food for thought.

Jorge

Carrikature
06-06-2014, 06:00 AM
There are suppositions.

There's a lot more than that. We're not modifying states of mind and behaviors on supposition alone. We're not initiating actions in mice (like stopping what they're doing and going to get a drink of water) based on supposition alone. We might still be lacking a complete explanation, but we've got a lot more than just supposition from which to draw.



Learning and memory are not sentience. You can lose the ability to remember and still be sentient.

I didn't say they were, Jed. Reread what I said, please, and this time pay attention to the two sentences that you cut out. Sparko's claim that "We are not any closer to a true sentient artificial mind now than we were 100 years ago." is downright false.

Sparko
06-06-2014, 06:58 AM
:yes:

I've heard lots of horror stories about thin paper ripping with too much pressure.

:yes: and can you imagine a paper cut from a D-size drawing? Why, I have seen architects cut their own heads off. Oh the horror!

Sparko
06-06-2014, 07:00 AM
There's a lot more than that. We're not modifying states of mind and behaviors on supposition alone. We're not initiating actions in mice (like stopping what they're doing and going to get a drink of water) based on supposition alone. We might still be lacking a complete explanation, but we've got a lot more than just supposition from which to draw.




I didn't say they were, Jed. Reread what I said, please, and this time pay attention to the two sentences that you cut out. Sparko's claim that "We are not any closer to a true sentient artificial mind now than we were 100 years ago." is downright false.

No it's not. Merely imitating thinking is not sentience, or self-awareness. We are no closer to that now than ever. Just because a computer can answer questions or perform tasks does not make it sentient. It doesn't initiate thoughts, it has no mind. It is not self-aware.

Carrikature
06-06-2014, 07:01 AM
:yes: and can you imagine a paper cut from a D-size drawing? Why, I have seen architects cut their own heads off. Oh the horror!

I still get those, though E1 are more common for me.

Sparko
06-06-2014, 07:08 AM
You would be wrong, then. I've spent enough time on the subject not to make that mistake. :smile:




I understood what the article is saying. I also know that it's an incomplete picture of what really is taking place these days. I've seen robots that are actively learning. Yes, we have really strong systems that are just database-lookup programs. In truth, part of intelligence is database lookup. It's more than that, of course, but you don't have inference and pattern recognition without remembrance of previous encounters.

I worked with robots and PLCs, and even taught a few robots for use in manufacturing. That is not sentience. That is basic programming and data storage. Neither is pattern recognition any sort of sentience. Pattern recognition in computers is merely matching photos with stored images to find the closest one in the database. Computers are really good at that sort of stuff, but it is not THINKING, it is processing. And it is not sentience.
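
Here's how mindless that matching step really is (a toy sketch; the "images" are invented three-number feature vectors, nothing like a production vision system):

import math

# Toy pattern "recognition": return the stored image nearest to the input.
# Pure distance arithmetic over a database - processing, not thinking.
stored_images = {
    "cat":  [0.9, 0.1, 0.3],
    "dog":  [0.8, 0.4, 0.2],
    "tree": [0.1, 0.9, 0.7],
}

def closest_match(features):
    return min(stored_images,
               key=lambda name: math.dist(features, stored_images[name]))

print(closest_match([0.85, 0.2, 0.25]))  # -> cat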

Carrikature
06-06-2014, 07:15 AM
No it's not. Merely imitating thinking is not sentience, or self-awareness. We are no closer to that now than ever. Just because a computer can answer questions or perform tasks does not make it sentient. It doesn't initiate thoughts, it has no mind. It is not self-aware.

A computer that can recognize speech and answer questions accordingly is a heck of a lot closer to sentience than one that requires punch cards to do basic math. An AI that possesses pattern recognition and employs machine learning is a lot closer to sentience than one that can't deal with visual stimuli at all. It's as if you're looking at a marathon and claiming that passing the twelve-mile mark is the same as passing the two-mile mark, because neither one has reached the finish line yet.

Carrikature
06-06-2014, 07:17 AM
I worked with robots and PLCs, and even taught a few robots for use in manufacturing. That is not sentience. That is basic programming and data storage. Neither is pattern recognition any sort of sentience. Pattern recognition in computers is merely matching photos with stored images to find the closest one in the database. Computers are really good at that sort of stuff, but it is not THINKING, it is processing. And it is not sentience.

I'm not claiming it IS sentience. Pay attention. You're comparing robotics to AI, which isn't the right way to do things. It's you, not me, who is looking at automation and proclaiming sentience as impossible. That's the same mistake the OP article is making. Robotics and AI are two very different fields. Yes, a robot can be given some form of AI, but they aren't synonymous. Working with robots and PLCs isn't going to give you a good feel for what modern systems are capable of, because modern AI systems are being developed in labs.

Sparko
06-06-2014, 07:20 AM
A computer that can recognize speech and answer questions accordingly is a heck of a lot closer to sentience than one that requires punch cards to do basic math.

No it is not. It is just better at imitating sentience. It is still just a program plugging away, processing input, breaking it down, looking up the answer and "printing it out" - just with speech instead of punch cards. Do you think Siri really understands what you are saying to it? That "she" thinks and reasons before answering your questions? She doesn't. She just has a really big search engine database and a good front end speech processor that breaks down the speech into search terms. No thinking involved. Just processing.
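
Strip out the microphone and "she" boils down to something like this (a made-up sketch; the stopword list and canned answers are invented):

# Toy assistant: text -> search terms -> database lookup -> canned reply.
FAQ = {
    frozenset(["weather", "today"]): "It is sunny today.",
    frozenset(["time"]): "It is three o'clock.",
}

STOPWORDS = {"what", "is", "the", "tell", "me", "right", "now"}

def respond(utterance):
    # The "speech processor": break the text down into search terms.
    terms = set(utterance.lower().split()) - STOPWORDS
    # The "search engine": look the answer up. No thinking involved.
    for keywords, reply in FAQ.items():
        if keywords <= terms:
            return reply
    return "Sorry, I didn't catch that."

print(respond("What is the weather today"))  # -> It is sunny today.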


An AI that possesses pattern recognition and employs machine learning is a lot closer to sentience than one that can't deal with visual stimuli at all. It's as if you're looking at a marathon and claiming that passing the twelve-mile mark is the same as passing the two-mile mark, because neither one has reached the finish line yet.

Define "sentience" for me so I know we are talking about the same thing.

Carrikature
06-06-2014, 08:30 AM
Define "sentience" for me so I know we are talking about the same thing.

Good question. Sentience, self-awareness and consciousness are often used interchangeably in layman discussions, though they're not necessarily the same thing. In artificial intelligence, sentience and self-awareness are generally treated as the same thing (though sentience isn't actually a stated goal). An AI has attained sentience/self-awareness if it can distinguish itself from its environment. Mind, that doesn't necessarily entail the ability to communicate meaningfully about its environment. There's a pretty strong consensus that non-human animals have consciousness for all that they may not pass the mirror test or be able to communicate with other species.

Sentience in philosophy of mind generally refers to possessing a subjective experience (qualia). Of course, there's no single definition for qualia. Many would claim that qualia entails sensations, feelings, emotions or something else entirely. While it's certainly true that all known cases of qualia entail emotions, it's not clear that emotions/feelings/whatever are specifically required for a subjective experience to be classified as qualia. There's some debate over that. I would deny that emotions are required. 'Subjective experience' requires a multi-level perception of events particular to a specific individual. A better definition, in my opinion, is qualia as subjective sensation where 'sensation' refers somewhat obviously to the five senses: taste, touch, smell, vision, hearing. Your vision is different than mine, and therefore we have different qualia. That sensation may invoke emotion, but emotion is not part of the sensation itself. That sensation could also invoke memory, something machines would easily be capable of. Used in this way, there's no reason that artificial systems couldn't attain sensory experiences. However, there are very few cases that I've encountered of people actually working towards an integrated system like what would be required to achieve this. There's one, iCub, that's doing something similar (though it has a different focus).

Of course, 'sentience' and 'intelligence' are different things. Intelligence does require pattern recognition, memory, inference and the like. However, when the general public looks at robots and discusses artificial intelligence, they're looking for an eventual replication of human experience in a machine body. That's what they really mean when they say the machines have achieved sentience. To achieve that, machines have to reach human or super-human intelligence capabilities, and that's what most of artificial intelligence has been working toward (keyword: intelligence). We have made incredible strides in that arena. You say that Siri and its ilk aren't sentient, to which I say, "of course not!". They're not supposed to be sentient, and they're not anywhere close to sentient. They are, however, incredibly intelligent. You say that Siri "just has a really big search engine database and a good front end speech processor that breaks down the speech into search terms". Combine that with Google's "Did you mean" functions, and you have a pretty high level of intelligence.

Now, here's the answer to your question: sentience is a subjective sensory experience. To be a sensory experience, I maintain that it has to be experienced as a whole, not broken down into constituent parts. For example, visual stimulation at certain wavelengths and perceived (albeit slow) motion of foreign bodies are distinguishable parts. It's the unified whole that is the experience of seeing a sunset. Fitting pieces into a unified whole requires pattern recognition and linking (intelligence). You can't achieve sentience unless you can combine multiple aspects into a single whole, and you can't do that until you've achieved the necessary level of intelligence. That's why work on intelligence is also progress towards sentience. No, we don't have sentient systems, and we're a long way from creating them. Even so, we've made a lot of progress in that direction.

Sparko
06-06-2014, 08:59 AM
Good question. Sentience, self-awareness and consciousness are often used interchangeably in layman discussions, though they're not necessarily the same thing. In artificial intelligence, sentience and self-awareness are generally treated as the same thing (though sentience isn't actually a stated goal). An AI has attained sentience/self-awareness if it can distinguish itself from its environment. Mind, that doesn't necessarily entail the ability to communicate meaningfully about its environment. There's a pretty strong consensus that non-human animals have consciousness for all that they may not pass the mirror test or be able to communicate with other species.

Sentience in philosophy of mind generally refers to possessing a subjective experience (qualia). Of course, there's no single definition for qualia. Many would claim that qualia entails sensations, feelings, emotions or something else entirely. While it's certainly true that all known cases of qualia entail emotions, it's not clear that emotions/feelings/whatever are specifically required for a subjective experience to be classified as qualia. There's some debate over that. I would deny that emotions are required. 'Subjective experience' requires a multi-level perception of events particular to a specific individual. A better definition, in my opinion, is qualia as subjective sensation where 'sensation' refers somewhat obviously to the five senses: taste, touch, smell, vision, hearing. Your vision is different than mine, and therefore we have different qualia. That sensation may invoke emotion, but emotion is not part of the sensation itself. That sensation could also invoke memory, something machines would easily be capable of. Used in this way, there's no reason that artificial systems couldn't attain sensory experiences. However, there are very few cases that I've encountered of people actually working towards an integrated system like what would be required to achieve this. There's one, iCub, that's doing something similar (though it has a different focus).

Of course, 'sentience' and 'intelligence' are different things. Intelligence does require pattern recognition, memory, inference and the like. However, when the general public looks at robots and discusses artificial intelligence, they're looking for an eventual replication of human experience in a machine body. That's what they really mean when they say the machines have achieved sentience. To achieve that, machines have to reach human or super-human intelligence capabilities, and that's what most of artificial intelligence has been working toward (keyword: intelligence). We have made incredible strides in that arena. You say that Siri and its ilk aren't sentient, to which I say, "of course not!". They're not supposed to be sentient, and they're not anywhere close to sentient. They are, however, incredibly intelligent. You say that Siri "just has a really big search engine database and a good front end speech processor that breaks down the speech into search terms". Combine that with Google's "Did you mean" functions, and you have a pretty high level of intelligence.

Now, here's the answer to your question: sentience is a subjective sensory experience. To be a sensory experience, I maintain that it has to be experienced as a whole, not broken down into constituent parts. For example, visual stimulation at certain wavelengths and perceived (albeit slow) motion of foreign bodies are distinguishable parts. It's the unified whole that is the experience of seeing a sunset. Fitting pieces into a unified whole requires pattern recognition and linking (intelligence). You can't achieve sentience unless you can combine multiple aspects into a single whole, and you can't do that until you've achieved the necessary level of intelligence. That's why work on intelligence is also progress towards sentience. No, we don't have sentient systems, and we're a long way from creating them. Even so, we've made a lot of progress in that direction.


pretty long winded answer :wink:

The goal of AI, especially as everyone thinks of it, is to create a sentient, self-aware computer/software-based entity. If you merely want to define "intelligence" as being non-sentient, then yes we have made strides in that area. But we are nowhere close to creating a self-aware computer entity. One that has a subjective experience, can initiate thought, consider its place in the world, think about its own future and be actually aware of other beings (us) - have a true "understanding" of what is going on around it.

It's all a shell game. I don't think we can ever program a self-aware being. I don't think we could ever even download a human consciousness into a software program and have it be self-aware. A human brain is just way too different from how a computer operates.

Roy
06-06-2014, 11:48 AM
pretty long winded answer :wink:

The goal of AI, especially as everyone thinks of it, is to create a sentient, self-aware computer/software-based entity.

Um, no.

The goal of AI research is and has been generating systems or algorithms that can make optimal choices based on circumstances and events. This includes such things as stock-trading algorithms, plant control systems, driverless cars and so on. The goal of artificial consciousness research is to create self-aware systems. Not the same thing. The folks designing and building artificial intelligence systems for practical purposes don't want their systems to be self-aware. It would lead to distractions, unpredictability beyond the expected complexity, and lack of confidence in the finished system.
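
In that sense, "optimal choice" can be as plain as this (a toy sketch with invented payoff numbers - nothing like a real trading or control system):

# Toy decision system: pick the action with the best expected payoff
# for the current circumstances. Practical AI, zero self-awareness.
PAYOFFS = {
    ("buy",  "rising"):  3.0, ("buy",  "falling"): -2.0,
    ("sell", "rising"): -1.0, ("sell", "falling"):  2.0,
    ("hold", "rising"):  0.5, ("hold", "falling"):  0.0,
}

def choose(state):
    return max(("buy", "sell", "hold"), key=lambda action: PAYOFFS[(action, state)])

print(choose("rising"))   # -> buy
print(choose("falling"))  # -> sell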

Who would want a driverless car that might wonder what it felt like to crash?

Roy

Sparko
06-06-2014, 11:52 AM
Um, no.

The goal of AI research is and has been generating systems or algorithms that can make optimal choices based on circumstances and events. This includes such things as stock-trading algorithms, plant control systems, driverless cars and so on. The goal of artificial consciousness research is to create self-aware systems. Not the same thing. The folks designing and building artificial intelligence systems for practical purposes don't want their systems to be self-aware. It would lead to distractions, unpredictability beyond the expected complexity, and lack of confidence in the finished system.

Who would want a driverless car that might wonder what it felt like to crash?

Roy

I am talking about in the context of the OP, Roy.

:ahem:

Carrikature
06-06-2014, 12:08 PM
pretty long winded answer :wink:

The goal of AI, especially as everyone thinks of it, is to create a sentient, self-aware computer/software-based entity. If you merely want to define "intelligence" as being non-sentient, then yes we have made strides in that area. But we are nowhere close to creating a self-aware computer entity. One that has a subjective experience, can initiate thought, consider its place in the world, think about its own future and be actually aware of other beings (us) - have a true "understanding" of what is going on around it.

It's all a shell game. I don't think we can ever program a self-aware being. I don't think we could ever even download a human consciousness into a software program and have it be self-aware. A human brain is just way too different from how a computer operates.

It's a complicated subject. Your answer is quite a bit shorter, and it's pretty easy to see why. You're approaching this with the layman attitude that they're all the same thing. They're not. That's why you claim we've made no progress towards sentience. You're not examining what the parts and pieces are. When you do this, you concede that we've been making progress.

Carrikature
06-06-2014, 12:09 PM
Um, no.

The goal of AI research is and has been generating systems or algorithms that can make optimal choices based on circumstances and events. This includes such things as stock-trading algorithms, plant control systems, driverless cars and so on. The goal of artificial consciousness research is to create self-aware systems. Not the same thing. The folks designing and building artificial intelligence systems for practical purposes don't want their systems to be self-aware. It would lead to distractions, unpredictability beyond the expected complexity, and lack of confidence in the finished system.

Who would want a driverless car that might wonder what it felt like to crash?

Roy

Right, and I think that's one of the failings of the OP article. He's looking at artificial intelligence and trying to answer questions about artificial consciousness.

Sparko
06-06-2014, 12:12 PM
Right, and I think that's one of the failings of the OP article. He's looking at artificial intelligence and trying to answer questions about artificial consciousness.

What is your experience and background on the subject?

Carrikature
06-06-2014, 01:00 PM
What is your experience and background on the subject?

Multifarious. I did robotics competitions in high school, though those were remote controlled. I had similar robotics work at a higher level in college, working on a team that designed, built, and programmed an autonomous robot that had to navigate colored lines. I've done programming at different levels and in different languages, including assembly, C++ and php, both as part of robotics and as separate image recognition (finding and counting shapes in an image). I've not spent much time with PLCs in my career as an electrical engineer, but the underlying logic and principles I've had classes on as part of the degree program. I've toyed with and aided in development of chatbots for use in MMORPGs. I've had classes on various aspects of neuroscience, and I consider myself at roughly journeyman level in general philosophy with focuses on philosophies of mind, morality and language (mind and language at least are relevant to this subject). I have a general interest in neuroscience, consciousness (human and non-human), and human development/learning, and I've read quite a bit of scholarly work on those subjects. Further, though it certainly counts for much less, I've read a ton of science fiction and fantasy, and I dare say I'm familiar with most if not all common portrayals of sentience/consciousness/intelligence in machines, humans and non-humans.

Sparko
06-06-2014, 01:05 PM
Multifarious. I did robotics competitions in high school, though those were remote controlled. I had similar robotics work at a higher level in college, working on a team that designed, built, and programmed an autonomous robot that had to navigate colored lines. I've done programming at different levels and in different languages, including assembly, C++ and php, both as part of robotics and as separate image recognition (finding and counting shapes in an image). I've not spent much time with PLCs in my career as an electrical engineer, but the underlying logic and principles I've had classes on as part of the degree program. I've toyed with and aided in development of chatbots for use in MMORPGs. I've had classes on various aspects of neuroscience, and I consider myself at roughly journeyman level in general philosophy with focuses on philosophies of mind, morality and language (mind and language at least are relevant to this subject). I have a general interest in neuroscience, consciousness (human and non-human), and human development/learning, and I've read quite a bit of scholarly work on those subjects. Further, though it certainly counts for much less, I've read a ton of science fiction and fantasy, and I dare say I'm familiar with most if not all common portrayals of sentience/consciousness/intelligence in machines, humans and non-humans.

sounds a lot like my background, but I never messed with robotic competitions, and wasn't much of a programmer, other than some simple programs and programming PLCs. I worked for a company that made automated cleaning machines, controlled by PLCs and some robotics (for loading and unloading the machines) - and I did electrical design and electronics repair. I have always been interested in sci-fi, neuroscience, AI, and such.

Jedidiah
06-06-2014, 02:27 PM
There's a lot more than that. We're not modifying states of mind and behaviors on supposition alone. We're not initiating actions in mice (like stopping what they're doing and going to get a drink of water) based on supposition alone. We might still be lacking a complete explanation, but we've got a lot more than just supposition from which to draw.

You said, "I've seen robots that are actively learning." To which I respond, Learning is not sentience. Further you said, "Yes, we have really strong systems that are just database-lookup programs. In truth, part of intelligence is database lookup. It's more than that, of course, but you don't have inference and pattern recognition without remembrance of previous encounters." My response is that memory is not sentience.

In fact your post was not very responsive. I referred to sentience, which as you may recall, is the subject of the OP. You addressed intelligence, not sentience. You responded that we have an incomplete picture but that "I've seen robots that are actively learning." Learning (and responding to situations is simply more complex learning) is not sentience. What more do you have? I repeat Sparko's claim that we are no closer to a true sentient artificial mind. Unless you can describe what more we need to do to accomplish artificial sentience, you have done no more than repeat what might have been said 50 years ago.

Originally I said: "The goal is definitely simulation. There is no actual explanation for sentience in existence." You have not taken a single step to disabuse me of that opinion. How is Sparko's statement that "We are not any closer to a true sentient artificial mind now than we were 100 years ago," shown to be "downright false?"

Jorge
06-06-2014, 03:31 PM
"We are not any closer to a true sentient artificial mind now than we were 100 years ago,"

It has been stated that the first step towards solving a problem is to understand it. Here you go ...
We are 'closer' than we were 100 years ago in the sense that 100 years ago we were clueless about the sheer magnitude of what true AI would require. Only in that sense are we any closer.

Jorge

oxmixmudd
06-06-2014, 03:51 PM
It has been stated that the first step towards solving a problem is to understand it. Here you go ...
We are 'closer' than we were 100 years ago in the sense that 100 years ago we were clueless about the sheer magnitude of what true AI would require. Only in that sense are we any closer.

Jorge

I wonder a bit on this. Do we really understand intelligence? Are we really closer to understanding it at the level you imply here than we were 100 years ago? Suppose one could write a system that could mimic human behavior in such a way that no matter what you did, no matter how one tested it, one could not tell the difference between how it responded and how a human would respond. Do we know enough to know it is not also self-aware in the sense that we are?

IOW, would the knowledge we now have make it any easier for us to draw that distinction than it would be for someone 100 years ago to do the same?

Jim

Sparko
06-06-2014, 03:55 PM
It has been stated that the first step towards solving a problem is to understand it. Here you go ...
We are 'closer' than we were 100 years ago in the sense that 100 years ago we were clueless about the sheer magnitude of what true AI would require. Only in that sense are we any closer.

Jorge


correct.

We know much more about how little we actually know.

Truthseeker
06-06-2014, 05:06 PM
I wonder a bit on this. Do we really understand intelligence? Are we really closer to understanding it at the level you imply here than we were 100 years ago? Suppose one could write a system that could mimic human behavior in such a way that no matter what you did, no matter how one tested it, one could not tell the difference between how it responded and how a human would respond. Do we know enough to know it is not also self-aware in the sense that we are?

Turing test. Some people claimed success, iirc.

Carrikature
06-06-2014, 07:09 PM
You said, "I've seen robots that are actively learning." To which I respond, Learning is not sentience. Further you said, "Yes, we have really strong systems that are just database-lookup programs. In truth, part of intelligence is database lookup. It's more than that, of course, but you don't have inference and pattern recognition without remembrance of previous encounters." My response is that memory is not sentience.

In fact your post was not very responsive. I referred to sentience, which as you may recall, is the subject of the OP. You addressed intelligence, not sentience. You responded that we have an incomplete picture but that "I've seen robots that are actively learning." Learning (and responding to situations is simply more complex learning) is not sentience. What more do you have? I repeat Sparko's claim that we are no closer to a true sentient artificial mind. Unless you can describe what more we need to do to accomplish artificial sentience, you have done no more than repeat what might have been said 50 years ago.

Originally I said: "The goal is definitely simulation. There is no actual explanation for sentience in existence." You have not taken a single step to disabuse me of that opinion. How is Sparko's statement that "We are not any closer to a true sentient artificial mind now than we were 100 years ago," shown to be "downright false?"

Nowhere have I claimed that learning or database-lookup is sentience. Your claim is that we are no closer to sentience. I gave a "long winded answer" that elaborates on what sentience is, what intelligence is, how the two are related, and most importantly how progress in artificial intelligence is also progress towards artificial sentience. I have taken 'a single step' (and then some), even if you choose to ignore it.

Carrikature
06-06-2014, 07:10 PM
We know much more about how little we actually know.

This I would definitely agree with.

Carrikature
06-06-2014, 07:21 PM
I wonder a bit on this. Do we really understand intelligence? Are we really closer to understanding it at the level you imply here than we were 100 years ago? Suppose one could write a system that could mimic human behavior in such a way that no matter what you did, no matter how one tested it, one could not tell the difference between how it responded and how a human would respond. Do we know enough to know it is not also self-aware in the sense that we are?

IOW, would the knowledge we now have make it any easier for us to draw that distinction than it would be for someone 100 years ago to do the same?

Jim

We understand intelligence much more than many are aware of and/or are willing to admit. Of course, intelligence carries with it certain assumptions. The problems with IQ tests as measures of intelligence are the same with comparing all aptitudes. It would be folly to suggest we are more intelligent than some creatures simply because we can solve problems they cannot. They are quite capable of solving problems we cannot. Intelligence as a claim to ability is in the same boat as every other evolutionary outcome: success is relative. It's important, then, to express intelligence in terms of possessing certain components rather than insisting on an arbitrary metric of skill in one or more of them.

For your hypothetical, I think you would be interested in Searle's Chinese Room Argument (http://plato.stanford.edu/entries/chinese-room/), if you're not already familiar with it.

Carrikature
06-06-2014, 07:36 PM
sounds a lot like my background, but I never messed with robotic competitions, and wasn't much of a programmer, other than some simple programs and programming PLCs. I worked for a company that made automated cleaning machines, controlled by PLCs and some robotics (for loading and unloading the machines) - and I did electrical design and electronics repair. I have always been interested in sci-fi, neuroscience, AI, and such.

I hesitate to reference a computer as an analogy but it's a useful one. Just as the computer has a central processor, a processor for graphics, and limited inputs from peripheral devices, so too does our brain. That much is readily understood in current neuroscience. The difference is that our brain is orders of magnitude more complex than the computers we possess today. So far as I can tell from my experience and understanding, consciousness emerges once a certain level of complexity is achieved (and we haven't found that threshold). That seems to be pretty strongly supported by our increased understanding of consciousness in non-human animals. There's not a magic threshold per se, but an increase in complexity pushes organisms ever closer to what we recognize as consciousness or sentience.

I don't find that anything in current artificial consciousness research is close to what would be required. However, to reach that complexity requires refining and expanding upon our capabilities in all aspects of electronic communication and processing. In the same way that advancements in pattern recognition and speech processing are also advancements towards intelligence, so too are advancements in small-scale technologies pushing us closer to the ability to process ever greater amounts of information, both in terms of speed and in terms of multiple, parallel routines.

I'd be the first to say that we're a very, very long way from achieving the amount of complexity required. I don't see any way in which "no closer" is a true statement, though. Some might say that it's an unattainable goal, and they might be right. The only obstacle I really see is time and effort.

Jedidiah
06-06-2014, 09:20 PM
. . .I gave a "long winded answer" that elaborates on what sentience is, what intelligence is, how the two are related, . . .

I seem to have missed this part.

Carrikature
06-07-2014, 12:06 PM
I seem to have missed this part.

Ok. :smile:

Sparko
06-07-2014, 01:54 PM
I hesitate to reference a computer as an analogy but it's a useful one. Just as the computer has a central processor, a processor for graphics, and limited inputs from peripheral devices, so too does our brain. That much is readily understood in current neuroscience. The difference is that our brain is orders of magnitude more complex than the computers we possess today. So far as I can tell from my experience and understanding, consciousness emerges once a certain level of complexity is achieved (and we haven't found that threshold). That seems to be pretty strongly supported by our increased understanding of consciousness in non-human animals. There's not a magic threshold per se, but an increase in complexity pushes organisms ever closer to what we recognize as consciousness or sentience.

I don't find that anything in current artificial consciousness research is close to what would be required. However, to reach that complexity requires refining and expanding upon our capabilities in all aspects of electronic communication and processing. In the same way that advancements in pattern recognition and speech processing are also advancements towards intelligence, so too are advancements in small-scale technologies pushing us closer to the ability to process ever greater amounts of information, both in terms of speed and in terms of multiple, parallel routines.

I'd be the first to say that we're a very, very long way from achieving the amount of complexity required. I don't see any way in which "no closer" is a true statement, though. Some might say that it's an unattainable goal, and they might be right. The only obstacle I really see is time and effort.

I don't think it has to do with reaching a certain level of complexity. There is nothing to support that. In fact, a lot of lower lifeforms, such as mice, have consciousness and self-awareness.

Jedidiah
06-07-2014, 02:15 PM
Ok. :smile:

I meant that I missed where you explained what sentience is. I even went back and reread all your posts. Can you explain that to me?

Carrikature
06-07-2014, 08:55 PM
I don't think it has to do with reaching a certain level of complexity. There is nothing to support that. In fact, a lot of lower lifeforms, such as mice, have consciousness and self-awareness.

Mice do not have self-awareness. Consciousness and self-awareness aren't the same thing. Most vertebrate animals are believed to have consciousness as defined by the ability to experience suffering (and other emotions). However, it's a mistake to think mice do not have complex brains just because theirs are not as complex as our own. In order to see levels of complexity, you need to examine the entire spectrum from single-celled organisms to humans. Perhaps more importantly, the argument for complexity as a basis for consciousness relies on emergent phenomena in all manner of topics.

Carrikature
06-07-2014, 09:24 PM
I meant that I missed where you explained what sentience is. I even went back and reread all your posts. Can you explain that to me?

Sentience is the ability to have a subjective sensory experience (qualia). The second paragraph of my post #35 goes into a lot more detail, but this is sentience in a nutshell.

Possibly barring a few carnivorous exceptions, plants aren't capable of sentience at all. They have no sensory apparatus. Single-celled organisms with eyespots might be considered capable of rudimentary sentience, but I doubt most people would call it such. Organisms like sponges are incapable of sentience. Computer programs are more than capable of receiving information from peripheral devices, but this would not be sufficient. Robots and other systems equipped with sensory apparatus like thermometers and cameras could be capable of sentience given some degree of autonomy. The biggest difference, afaict, is between instructed polling of peripherals and real-time sampling, processing and response generation. The iCub I mentioned earlier is getting pretty close to that last.
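
To make that difference concrete (a toy sketch; the sensor stand-in and threshold are invented):

import random, time

def read_temperature():
    # Stand-in for reading a real peripheral.
    return 20 + random.random() * 10

# Instructed polling: the peripheral is consulted only when a caller asks.
def check_once():
    return read_temperature()

# Real-time loop: sample, process and respond on the system's own schedule.
def sense_loop(cycles=5):
    for _ in range(cycles):
        temp = read_temperature()
        if temp > 28:                       # process the sample...
            print("Too warm - responding")  # ...and act, unprompted
        time.sleep(0.1)

sense_loop()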

Roy
06-08-2014, 02:49 AM
Possibly barring a few carnivorous exceptions, plants aren't capable of sentience at all. They have no sensory apparatus.

Hmmm. Plants may not be sentient, but they do have sensory apparatus. Most flowers can detect light sufficiently to turn towards the sun. Plants can sense the level of moisture and carbon dioxide in the air and adjust the size of the stomata in their leaves accordingly. Seeds sense temperature and moisture levels and germinate accordingly. Some plants react to the presence of chemicals released by their neighbours. Venus fly-traps aren't the only plants that react to their leaves being touched.

Plants may not be able to see or hear, but they can scent, taste and feel.

Roy

Carrikature
06-08-2014, 07:17 AM
Hmmm. Plants may not be sentient, but they do have sensory apparatus. Most flowers can detect light sufficiently to turn towards the sun. Plants can sense the level of moisture and carbon dioxide in the air and adjust the size of the stomata in their leaves accordingly. Seeds sense temperature and moisture levels and germinate accordingly. Some plants react to the presence of chemicals released by their neighbours. Venus fly-traps aren't the only plants that react to their leaves being touched.

Plants may not be able to see or hear, but they can scent, taste and feel.

Roy

I stand corrected. Don't you think there's a difference between being the actor and being acted upon, though? I wouldn't have thought it accurate to describe seeds as acting, rather that the conditions have to be right for certain automatic processes to function.

Truthseeker
06-08-2014, 12:29 PM
If someone discusses sentience or claims that such a thing has it, I would ask, does that involve consciousness? If yes, I would then ask for a definition of consciousness and ask how do you know that thing is conscious. Intelligence can be measured, just give the thing a test of intelligence. Of course whatever kind of intelligence is being measured depends on what test you are giving the thing. So, an infinity of possible definitions of intelligence. Are some machines such as robots intelligent? Well, yes, if they do the job that we want.

Roy
06-08-2014, 01:03 PM
I stand corrected. Don't you think there's a difference between being the actor and being acted upon, though? I wouldn't have thought it accurate to describe seeds as acting, rather that the conditions have to be right for certain automatic processes to function.

Definitely - but most if not all senses involve being acted upon, whether being impacted by photons, air, macroscopic objects or volatile chemicals. Sure, we can point our eyes (and to a lesser extent ears) in a particular direction, or place objects on our tongue or near our nose, but that just changes what we're sensing from, not what we sense.

But maybe germination was a bit too extreme an example, as it's probably more like the process that causes blisters to form on heated skin than it is to the process that causes nerve impulses based on the frequency of air pressure changes.

Roy

Carrikature
06-09-2014, 06:26 AM
If someone discusses sentience or claims that such a thing has it, I would ask, does that involve consciousness? If yes, I would then ask for a definition of consciousness and ask how do you know that thing is conscious. Intelligence can be measured, just give the thing a test of intelligence. Of course whatever kind of intelligence is being measured depends on what test you are giving the thing. So, an infinity of possible definitions of intelligence. Are some machines such as robots intelligent? Well, yes, if they do the job that we want.

Sentience is required for consciousness, but sentience and consciousness are not identical. For a good discussion on consciousness, check out the SEP article (http://plato.stanford.edu/entries/consciousness/).

Knowing if another entity is conscious is an issue that's as yet unresolved, but there are some hints. In mammals, we can point to brain structures in non-human animals that are analogous to human brain structures that are activated as part of emotional states. Observation of decision-making and intentionality are strong indicators. The use of the mirror test to determine an entity's ability to recognize itself in a mirror is a pretty good indication of self-awareness. It's useful to look at human development for clues, too. Humans are unable to pass the mirror test until a certain age. Milestones in human development include decision-making and pattern-recognition/sorting.

Carrikature
06-09-2014, 06:32 AM
Definitely - but most if not all senses involve being acted upon, whether being impacted by photons, air, macroscopic objects or volatile chemicals. Sure, we can point our eyes (and to a lesser extent ears) in a particular direction, or place objects on our tongue or near our nose, but that just changes what we're sensing from, not what we sense.

But maybe germination was a bit too extreme an example, as it's probably more like the process that causes blisters to form on heated skin than it is to the process that causes nerve impulses based on the frequency of air pressure changes.

Roy

Ah, it wasn't my intent to imply that the senses aren't acted upon. I'm just not sure what level of response to stimuli counts as self-directed. I think we agree that germination wouldn't count. I'm not sure about plants facing the sun, since I thought that was just phototropism, an automatic growth response to light. Admittedly, examining self-direction gets into consciousness and self-awareness, not sentience.

Sparko
06-10-2014, 11:15 AM
Apparently a computer just passed the Turing test:

http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx

http://www.extremetech.com/extreme/183851-eugene-goostman-becomes-the-first-computer-to-pass-the-turing-test-convincing-judges-that-hes-a-13-year-old-boy

rogue06
06-10-2014, 11:19 AM
Apparently a computer just passed the Turing test:

http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx

http://www.extremetech.com/extreme/183851-eugene-goostman-becomes-the-first-computer-to-pass-the-turing-test-convincing-judges-that-hes-a-13-year-old-boy
Considering the conversational ability of today's average thirteen-year-old, that challenge isn't as difficult as it once was.

Jedidiah
06-10-2014, 01:50 PM
That means that a computer can be programmed to fool people into thinking it is really human. Nothing more.

KingsGambit
06-10-2014, 01:55 PM
I saw that, but a friend of mine posted another article that suggested that the Turing test claim isn't all that legitimate (and the claimant does have a history of making dubious claims to the media):

https://www.techdirt.com/articles/20140609/07284327524/no-computer-did-not-pass-turing-test-first-time-everyone-should-know-better.shtml

Truthseeker
06-10-2014, 03:31 PM
The posts here remind me of something Isaac Asimov said about the art of fooling people into thinking he was an expert on any given subject.

Sparko
06-10-2014, 04:17 PM
I saw that, but a friend of mine posted another article that suggested that the Turing test claim isn't all that legitimate (and the claimant does have a history of making dubious claims to the media):

https://www.techdirt.com/articles/20140609/07284327524/no-computer-did-not-pass-turing-test-first-time-everyone-should-know-better.shtml

I believe that. It just means it has a really good database. Also, it's easier to fool people if they think they're talking to someone dumb, or foreign, or, in this case, a child. They can give odd answers and you're more likely to let them slide, especially if you were initially under the impression that you were speaking with a person.
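
To illustrate, here's a minimal sketch in Python of that "really good database" trick: canned responses keyed on patterns, plus a child persona to excuse the non-answers. The patterns, replies, and persona lines below are all invented for illustration; Goostman's actual code was never published, so this is just the general technique, not the real thing.

import random
import re

# Hypothetical response "database": regex pattern -> canned replies.
RESPONSES = [
    (r"\bhow old\b", ["I am 13. My mum says I am small for my age."]),
    (r"\bwhere\b.*\blive\b", ["Odessa. It is in Ukraine, do you know it?"]),
    (r"\?$", ["That is a funny question!", "Why do you want to know that?"]),
]

# Persona deflections used when nothing in the database matches;
# sounding like a distracted kid makes a judge likelier to let it slide.
DEFLECTIONS = [
    "Sorry, I did not understand. I am only 13, you know.",
    "Can we talk about something else? Do you like computer games?",
]

def reply(user_input):
    text = user_input.lower().strip()
    for pattern, answers in RESPONSES:
        if re.search(pattern, text):
            return random.choice(answers)
    return random.choice(DEFLECTIONS)

while True:
    print(reply(input("> ")))

The whole "intelligence" is a lookup table with a fallback; the persona does the rest of the work.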

Teallaura
06-10-2014, 05:35 PM
I believe that. It just means it has a really good database. Also, it's easier to fool people if they think they're talking to someone dumb, or foreign, or, in this case, a child. They can give odd answers and you're more likely to let them slide, especially if you were initially under the impression that you were speaking with a person.
Yeah, that's been working for you for years! :yes:




:uhoh: Did I say that out loud?

KingsGambit
06-10-2014, 06:51 PM
I believe that. It just means it has a really good database. Also, it's easier to fool people if they think they're talking to someone dumb, or foreign, or, in this case, a child. They can give odd answers and you're more likely to let them slide, especially if you were initially under the impression that you were speaking with a person.

I saw some of the transcripts from that chatbot, and here's what really got me: what 13-year-old actually uses proper spelling and grammar in chat?

Sparko
06-10-2014, 08:13 PM
Yeah, that's been working for you for years! :yes:




:uhoh: Did I say that out loud?


Does not compute. searching... searching... reset to initial parameters. rebooting...

Hi there! Do you like s'mores?

Teallaura
06-10-2014, 08:17 PM
Yes, go fish.

rwatts
06-10-2014, 08:24 PM
I found this really interesting.

I think, like so much of science, AI, or the prospect of it, is utterly and thoroughly amazing. However, there is a worrying downside, and I think it's this way with all of science: the technology it spawns can have a dangerous alternative face.

This is not only the case with AI; it's also true of atomic theory (splitting the atom), genetics (the ability to manipulate the genome and make new life), electrical theory (the ability to make very small sensors), and so on.

If only we, as inquisitive humans, could use science to address really interesting questions, and apply technology to that end, instead of learning how to use it for our darker side.

A really intelligent critique of science, and of the technology it gives rise to, is often lacking.

rogue06
06-10-2014, 11:29 PM
Does not compute. searching... searching... reset to initial parameters. rebooting...

Hi there! Do you like s'mores?
Bacon is good for me, is good for me :smile:

Sparko
06-11-2014, 05:33 AM
Yes, go fish.

WOPR: Greetings Professor Falken. Shall we play a game?

Chess
Poker
Fighter Combat
Guerrilla Engagement
Desert Warfare
Air-to-Ground Actions
Theaterwide Tactical Warfare
Theaterwide Biotoxic and Chemical Warfare

Global Thermonuclear War

__

Jorge
06-11-2014, 06:12 AM
Apparently a computer just passed the Turing test:

http://www.reading.ac.uk/news-and-events/releases/PR583836.aspx

http://www.extremetech.com/extreme/183851-eugene-goostman-becomes-the-first-computer-to-pass-the-turing-test-convincing-judges-that-hes-a-13-year-old-boy

Not so.

The "passing" was actually of a perverted, invalid version of the Turing Test.
The true Turing Test has not been passed and, IMHO, never will be.

Jorge

Sparko
06-11-2014, 06:35 AM
Not so.

The "passing" was actually of a perverted, invalid version of the Turing Test.
The true Turing Test has not been passed and, IMHO, never will be.

Jorge

How was it "perverted and invalid"?

Sparko
06-11-2014, 06:37 AM
Here is a link to the bot if anyone is interested. It's been really busy since the article came out, so good luck getting it to load:

http://www.princetonai.com/bot/bot.jsp

Sparko
06-11-2014, 06:51 AM
LOL. The chatbot is an idiot. There is no way it would pass for a human. How the hell did it pass a Turing test? If I didn't already know it was a bot, I would think it was a bot after talking to it for 10 seconds.

Teallaura
06-11-2014, 09:43 AM
WOPR: Greetings Professor Falken. Shall we play a game?

Chess
Poker
Fighter Combat
Guerrilla Engagement
Desert Warfare
Air-to-Ground Actions
Theaterwide Tactical Warfare
Theaterwide Biotoxic and Chemical Warfare

Global Thermonuclear War

__


Thermonuclear war - drop nuke on your server. I win! :smug:

Teallaura
06-11-2014, 09:44 AM
LOL. The chatbot is an idiot. There is no way it would pass for a human. How the hell did it pass a Turing test? If I didn't already know it was a bot, I would think it was a bot after talking to it for 10 seconds.
Oh now, be fair - several posters here can't pass for human, either... :teeth:

Carrikature
06-11-2014, 10:09 AM
LOL. The chatbot is an idiot. There is no way it would pass for a human. How the hell did it pass a Turing test? If I didn't already know it was a bot, I would think it was a bot after talking to it for 10 seconds.

I agree. I got the impression that allowances were made for language barrier issues, but even that is no excuse.



Oh now, be fair - several posters here can't pass for human, either... :teeth:

What we've really seen is a new application of Poe's Law. :yes:

Sparko
06-11-2014, 10:17 AM
Thermonuclear war - drop nuke on your server. I win! :smug:

I'm sorry, but I did not get that. Did you say "Thermal Underwear"?

After the tone, please repeat your answer, or press 0 to nuke Alabama.

Your call is important to us. Please hold while we ignore it.

Carrikature
06-11-2014, 10:41 AM
Your call is important to us. Please hold while we ignore it.


http://www.youtube.com/watch?v=-UvvkWd_dR4