Announcement

Natural Science 301 Guidelines

This is an open forum area for all members for discussions on all issues of science and origins. This area will and does get volatile at times, but we ask that it be kept to a dull roar, and moderators will intervene to keep the peace if necessary. This means obvious trolling and flaming that becomes a problem will be dealt with, and you might find yourself in the doghouse.

As usual, Tweb rules apply. If you haven't read them, now would be a good time.

Forum Rules: Here

"AI is a dream we shouldn't be having"


  • #31
    Originally posted by Carrikature View Post
    You would be wrong, then. I've spent enough time on the subject not to make that mistake.

    I understood what the article is saying. I also know that it's an incomplete picture of what really is taking place these days. I've seen robots that are actively learning. Yes, we have really strong systems that are just database-lookup programs. In truth, part of intelligence is database lookup. It's more than that, of course, but you don't have inference and pattern recognition without remembrance of previous encounters.
    I worked with robots and PLCs, and even taught a few robots for use in manufacturing. That is not sentience. That is basic programming and data storage. Neither is pattern recognition any sort of sentience. Pattern recognition in computers is merely matching photos against stored images to find the closest one in the database. Computers are really good at that sort of stuff, but it is not THINKING, it is processing. And it is not sentience.
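    As a side note, the nearest-match lookup described above fits in a few lines of Python. Everything here (the stored "images", the feature vectors, the distance rule) is invented purely for illustration; real vision systems use far richer features, but the shape of the lookup is the same:

    ```python
    import math

    # Toy "database" of stored images, each boiled down to a tiny feature vector.
    # The labels and numbers are invented purely for illustration.
    stored_images = {
        "cat": [0.9, 0.1, 0.3],
        "dog": [0.8, 0.4, 0.2],
        "car": [0.1, 0.9, 0.7],
    }

    def euclidean(a, b):
        """Distance between two feature vectors."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def closest_match(query):
        """Return the stored label whose features lie nearest to the query."""
        return min(stored_images, key=lambda label: euclidean(stored_images[label], query))

    # A query vector close to the stored "cat" features matches "cat".
    print(closest_match([0.85, 0.2, 0.25]))  # prints: cat
    ```

    Nothing in that loop remembers, infers, or understands; it just minimizes a distance, which is the point being made.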

    Comment


    • #32
      Originally posted by Sparko View Post
      No it's not. Merely imitating thinking is not sentience, or self-awareness. We are no closer to that now than ever. Just because a computer can answer questions or perform tasks does not make it sentient. It doesn't initiate thoughts, it has no mind. It is not self-aware.
      A computer that can recognize speech and answer questions accordingly is a heck of a lot closer to sentience than one that requires punch cards to do basic math. An AI that possesses pattern recognition and employs machine learning is a lot closer to sentience than one that can't deal with visual stimuli at all. It's as if you're looking at a marathon and claiming that passing the twelve-mile mark is the same as passing the two-mile mark, because neither one has reached the finish line yet.
      I'm not here anymore.

      Comment


      • #33
        Originally posted by Sparko View Post
        I worked with robots and PLCs, and even taught a few robots for use in manufacturing. That is not sentience. That is basic programming and data storage. Neither is pattern recognition any sort of sentience. Pattern recognition in computers is merely matching photos against stored images to find the closest one in the database. Computers are really good at that sort of stuff, but it is not THINKING, it is processing. And it is not sentience.
        I'm not claiming it IS sentience. Pay attention. You're comparing robotics to AI, which isn't the right way to do things. It's you, not me, who is looking at automation and proclaiming sentience as impossible. That's the same mistake the OP article is making. Robotics and AI are two very different fields. Yes, a robot can be given some form of AI, but they aren't synonymous. Working with robots and PLCs isn't going to give you a good feel for what modern systems are capable of, because modern AI systems are being developed in labs.

        Comment


        • #34
          Originally posted by Carrikature View Post
          A computer that can recognize speech and answer questions accordingly is a heck of a lot closer to sentience than one that requires punch cards to do basic math.
          No it is not. It is just better at imitating sentience. It is still just a program plugging away, processing input, breaking it down, looking up the answer and "printing it out" - just with speech instead of punch cards. Do you think Siri really understands what you are saying to it? That "she" thinks and reasons before answering your questions? She doesn't. She just has a really big search engine database and a good front end speech processor that breaks down the speech into search terms. No thinking involved. Just processing.
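          For what it's worth, that "speech terms in, database answer out" pipeline is easy to caricature in code. The question bank, stopword list, and matching rule below are all invented for illustration; no real assistant works this simply, but the shape is the same: no reasoning, just matching.

          ```python
          # Toy caricature of the "break speech into search terms, then look up" pipeline.
          # The question bank, stopword list, and matching rule are all invented
          # for illustration; nothing here reflects how any real assistant works.

          ANSWERS = {
              frozenset({"capital", "france"}): "Paris",
              frozenset({"boiling", "point", "water"}): "100 degrees Celsius at sea level",
          }

          STOPWORDS = {"what", "is", "the", "of", "a", "an"}

          def to_search_terms(utterance):
              """Front end: reduce an utterance to a bag of keywords."""
              words = utterance.lower().strip("?!. ").split()
              return frozenset(w for w in words if w not in STOPWORDS)

          def answer(utterance):
              """Back end: match the keywords against the stored question bank."""
              terms = to_search_terms(utterance)
              for keywords, reply in ANSWERS.items():
                  if keywords <= terms:  # every stored keyword appears in the query
                      return reply
              return "I don't know."

          print(answer("What is the capital of France?"))  # prints: Paris
          ```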

          An AI that possesses pattern recognition and employs machine learning is a lot closer to sentience than one that can't deal with visual stimuli at all. It's as if you're looking at a marathon and claiming that passing the twelve-mile mark is the same as passing the two-mile mark, because neither one has reached the finish line yet.
          Define "sentience" for me so I know we are talking about the same thing.

          Comment


          • #35
            Originally posted by Sparko View Post
            Define "sentience" for me so I know we are talking about the same thing.
            Good question. Sentience, self-awareness and consciousness are often used interchangeably in layman discussions, though they're not necessarily the same thing. In artificial intelligence, sentience and self-awareness are generally treated as the same thing (though sentience isn't actually a stated goal). An AI has attained sentience/self-awareness if it can distinguish itself from its environment. Mind, that doesn't necessarily entail the ability to communicate meaningfully about its environment. There's a pretty strong consensus that non-human animals have consciousness for all that they may not pass the mirror test or be able to communicate with other species.

            Sentience in philosophy of mind generally refers to possessing a subjective experience (qualia). Of course, there's no single definition for qualia. Many would claim that qualia entails sensations, feelings, emotions or something else entirely. While it's certainly true that all known cases of qualia entail emotions, it's not clear that emotions/feelings/whatever are specifically required for a subjective experience to be classified as qualia. There's some debate over that. I would deny that emotions are required. 'Subjective experience' requires a multi-level perception of events particular to a specific individual. A better definition, in my opinion, is qualia as subjective sensation where 'sensation' refers somewhat obviously to the five senses: taste, touch, smell, vision, hearing. Your vision is different than mine, and therefore we have different qualia. That sensation may invoke emotion, but emotion is not part of the sensation itself. That sensation could also invoke memory, something machines would easily be capable of. Used in this way, there's no reason that artificial systems couldn't attain sensory experiences. However, there are very few cases that I've encountered of people actually working towards an integrated system like what would be required to achieve this. There's one, iCub, that's doing something similar (though it has a different focus).

            Of course, 'sentience' and 'intelligence' are different things. Intelligence does require pattern recognition, memory, inference and the like. However, when the general public looks at robots and discusses artificial intelligence, they're looking for an eventual replication of human experience in a machine body. That's what they really mean when they say the machines have achieved sentience. To achieve that, machines have to reach human or super-human intelligence capabilities, and that's what most of artificial intelligence has been working toward (keyword: intelligence). We have made incredible strides in that arena. You say that Siri and its ilk aren't sentient, to which I say, "of course not!". They're not supposed to be sentient, and they're not anywhere close to sentient. They are, however, incredibly intelligent. You say that Siri "just has a really big search engine database and a good front end speech processor that breaks down the speech into search terms". Combine that with Google's "Did you mean" functions, and you have a pretty high level of intelligence.

            Now, here's the answer to your question: sentience is a subjective sensory experience. To be a sensory experience, I maintain that it has to be experienced as a whole, not broken down into constituent parts. For example, visual stimulation at certain wavelengths and perceived (albeit slow) motion of foreign bodies are distinguishable parts. It's the unified whole that is the experience of seeing a sunset. Fitting pieces into a unified whole requires pattern recognition and linking (intelligence). You can't achieve sentience unless you can combine multiple aspects into a single whole, and you can't do that until you've achieved the necessary level of intelligence. That's why work on intelligence is also progress towards sentience. No, we don't have sentient systems, and we're a long way from creating them. Even so, we've made a lot of progress in that direction.
            Last edited by Carrikature; 06-06-2014, 10:33 AM.

            Comment


            • #36
              Originally posted by Carrikature View Post
              Good question. Sentience, self-awareness and consciousness are often used interchangeably in layman discussions, though they're not necessarily the same thing. In artificial intelligence, sentience and self-awareness are generally treated as the same thing (though sentience isn't actually a stated goal). An AI has attained sentience/self-awareness if it can distinguish itself from its environment. Mind, that doesn't necessarily entail the ability to communicate meaningfully about its environment. There's a pretty strong consensus that non-human animals have consciousness for all that they may not pass the mirror test or be able to communicate with other species.

              Sentience in philosophy of mind generally refers to possessing a subjective experience (qualia). Of course, there's no single definition for qualia. Many would claim that qualia entails sensations, feelings, emotions or something else entirely. While it's certainly true that all known cases of qualia entail emotions, it's not clear that emotions/feelings/whatever are specifically required for a subjective experience to be classified as qualia. There's some debate over that. I would deny that emotions are required. 'Subjective experience' requires a multi-level perception of events particular to a specific individual. A better definition, in my opinion, is qualia as subjective sensation where 'sensation' refers somewhat obviously to the five senses: taste, touch, smell, vision, hearing. Your vision is different than mine, and therefore we have different qualia. That sensation may invoke emotion, but emotion is not part of the sensation itself. That sensation could also invoke memory, something machines would easily be capable of. Used in this way, there's no reason that artificial systems couldn't attain sensory experiences. However, there are very few cases that I've encountered of people actually working towards an integrated system like what would be required to achieve this. There's one, iCub, that's doing something similar (though it has a different focus).

              Of course, 'sentience' and 'intelligence' are different things. Intelligence does require pattern recognition, memory, inference and the like. However, when the general public looks at robots and discusses artificial intelligence, they're looking for an eventual replication of human experience in a machine body. That's what they really mean when they say the machines have achieved sentience. To achieve that, machines have to reach human or super-human intelligence capabilities, and that's what most of artificial intelligence has been working toward (keyword: intelligence). We have made incredible strides in that arena. You say that Siri and its ilk aren't sentient, to which I say, "of course not!". They're not supposed to be sentient, and they're not anywhere close to sentient. They are, however, incredibly intelligent. You say that Siri "just has a really big search engine database and a good front end speech processor that breaks down the speech into search terms". Combine that with Google's "Did you mean" functions, and you have a pretty high level of intelligence.

              Now, here's the answer to your question: sentience is a subjective sensory experience. To be a sensory experience, I maintain that it has to be experienced as a whole, not broken down into constituent parts. For example, visual stimulation at certain wavelengths and perceived (albeit slow) motion of foreign bodies are distinguishable parts. It's the unified whole that is the experience of seeing a sunset. Fitting pieces into a unified whole requires pattern recognition and linking (intelligence). You can't achieve sentience unless you can combine multiple aspects into a single whole, and you can't do that until you've achieved the necessary level of intelligence. That's why work on intelligence is also progress towards sentience. No, we don't have sentient systems, and we're a long way from creating them. Even so, we've made a lot of progress in that direction.

              pretty long-winded answer

              The goal of AI, especially what everyone thinks of as AI, is to create a sentient, self-aware computer/software-based entity. If you merely want to define "intelligence" as being non-sentient, then yes, we have made strides in that area. But we are nowhere close to creating a self-aware computer entity. One that has a subjective experience, can initiate thought, consider its place in the world, think about its own future and be actually aware of other beings (us) - have a true "understanding" of what is going on around it.

              It's all a shell game. I don't think we can ever program a self-aware being. I don't think we could ever even download a human consciousness into a software program and have it be self-aware. A human brain is just way too different from how a computer operates.

              Comment


              • #37
                Originally posted by Sparko View Post
                pretty long-winded answer

                The goal of AI, especially what everyone thinks of as AI, is to create a sentient, self-aware computer/software-based entity.
                Um, no.

                The goal of AI research is and has been generating systems or algorithms that can make optimal choices based on circumstances and events. This includes such things as stock-trading algorithms, plant control systems, driverless cars and so on. The goal of artificial consciousness research is to create self-aware systems. Not the same thing. The folks designing and building artificial intelligence systems for practical purposes don't want their systems to be self-aware. It would lead to distractions, unpredictability beyond the expected complexity, and lack of confidence in the finished system.
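                That "optimal choices based on circumstances and events" framing can be caricatured in a few lines. The driverless-car state, actions, and utility numbers below are hypothetical, invented only to show the shape of the decision procedure: score the actions, pick the best, no inner life required.

                ```python
                # Minimal caricature of an AI system as "score the actions, pick the best".
                # The driverless-car state, actions, and utility numbers are hypothetical;
                # they only show the shape of the decision procedure.

                def choose_action(state, actions, utility):
                    """Return the action with the highest utility in the given state."""
                    return max(actions, key=lambda action: utility(state, action))

                def utility(state, action):
                    """Made-up scoring rule: braking pays off when an obstacle is close."""
                    distance = state["obstacle_distance_m"]
                    if action == "brake":
                        return 10 if distance < 20 else 1
                    if action == "maintain_speed":
                        return 1 if distance < 20 else 10
                    return 0

                print(choose_action({"obstacle_distance_m": 12},
                                    ["brake", "maintain_speed"], utility))  # prints: brake
                ```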

                Who would want a driverless car that might wonder what it felt like to crash?

                Roy

                Comment


                • #38
                  Originally posted by Roy View Post
                  Um, no.

                  The goal of AI research is and has been generating systems or algorithms that can make optimal choices based on circumstances and events. This includes such things as stock-trading algorithms, plant control systems, driverless cars and so on. The goal of artificial consciousness research is to create self-aware systems. Not the same thing. The folks designing and building artificial intelligence systems for practical purposes don't want their systems to be self-aware. It would lead to distractions, unpredictability beyond the expected complexity, and lack of confidence in the finished system.

                  Who would want a driverless car that might wonder what it felt like to crash?

                  Roy
                  I am talking about it in the context of the OP, Roy.


                  Comment


                  • #39
                    Originally posted by Sparko View Post
                    pretty long-winded answer

                    The goal of AI, especially what everyone thinks of as AI, is to create a sentient, self-aware computer/software-based entity. If you merely want to define "intelligence" as being non-sentient, then yes, we have made strides in that area. But we are nowhere close to creating a self-aware computer entity. One that has a subjective experience, can initiate thought, consider its place in the world, think about its own future and be actually aware of other beings (us) - have a true "understanding" of what is going on around it.

                    It's all a shell game. I don't think we can ever program a self-aware being. I don't think we could ever even download a human consciousness into a software program and have it be self-aware. A human brain is just way too different from how a computer operates.
                    It's a complicated subject. Your answer is quite a bit shorter, and it's pretty easy to see why. You're approaching this with the layman attitude that they're all the same thing. They're not. That's why you claim we've made no progress towards sentience. You're not examining what the parts and pieces are. When you do, you'll have to concede that we've been making progress.

                    Comment


                    • #40
                      Originally posted by Roy View Post
                      Um, no.

                      The goal of AI research is and has been generating systems or algorithms that can make optimal choices based on circumstances and events. This includes such things as stock-trading algorithms, plant control systems, driverless cars and so on. The goal of artificial consciousness research is to create self-aware systems. Not the same thing. The folks designing and building artificial intelligence systems for practical purposes don't want their systems to be self-aware. It would lead to distractions, unpredictability beyond the expected complexity, and lack of confidence in the finished system.

                      Who would want a driverless car that might wonder what it felt like to crash?

                      Roy
                      Right, and I think that's one of the failings of the OP article. He's looking at artificial intelligence and trying to answer questions about artificial consciousness.

                      Comment


                      • #41
                        Originally posted by Carrikature View Post
                        Right, and I think that's one of the failings of the OP article. He's looking at artificial intelligence and trying to answer questions about artificial consciousness.
                        What is your experience and background on the subject?

                        Comment


                        • #42
                          Originally posted by Sparko View Post
                          What is your experience and background on the subject?
                          Multifarious. I did robotics competitions in high school, though those were remote controlled. I had similar robotics work at a higher level in college, working on a team that designed, built, and programmed an autonomous robot that had to navigate colored lines. I've done programming at different levels and in different languages, including assembly, C++ and PHP, both as part of robotics and as separate image recognition (finding and counting shapes in an image). I've not spent much time with PLCs in my career as an electrical engineer, but I've had classes on the underlying logic and principles as part of the degree program. I've toyed with and aided in development of chatbots for use in MMORPGs. I've had classes on various aspects of neuroscience, and I consider myself at roughly journeyman level in general philosophy with focuses on philosophies of mind, morality and language (mind and language at least are relevant to this subject). I have a general interest in neuroscience, consciousness (human and non-human), and human development/learning, and I've read quite a bit of scholarly work on those subjects. Further, though it certainly counts for much less, I've read a ton of science fiction and fantasy, and I dare say I'm familiar with most if not all common portrayals of sentience/consciousness/intelligence in machines, humans and non-humans.

                          Comment


                          • #43
                            Originally posted by Carrikature View Post
                            Multifarious. I did robotics competitions in high school, though those were remote controlled. I had similar robotics work at a higher level in college, working on a team that designed, built, and programmed an autonomous robot that had to navigate colored lines. I've done programming at different levels and in different languages, including assembly, C++ and PHP, both as part of robotics and as separate image recognition (finding and counting shapes in an image). I've not spent much time with PLCs in my career as an electrical engineer, but I've had classes on the underlying logic and principles as part of the degree program. I've toyed with and aided in development of chatbots for use in MMORPGs. I've had classes on various aspects of neuroscience, and I consider myself at roughly journeyman level in general philosophy with focuses on philosophies of mind, morality and language (mind and language at least are relevant to this subject). I have a general interest in neuroscience, consciousness (human and non-human), and human development/learning, and I've read quite a bit of scholarly work on those subjects. Further, though it certainly counts for much less, I've read a ton of science fiction and fantasy, and I dare say I'm familiar with most if not all common portrayals of sentience/consciousness/intelligence in machines, humans and non-humans.
                            Sounds a lot like my background, but I never messed with robotics competitions, and I wasn't much of a programmer, other than writing some simple programs and programming PLCs. I worked for a company that made automated cleaning machines, controlled by PLCs and some robotics (for loading and unloading the machines), and I did electrical design and electronics repair. I have always been interested in sci-fi, neuroscience, AI, and such.

                            Comment


                            • #44
                              Originally posted by Carrikature View Post
                              There's a lot more than that. We're not modifying states of mind and behaviors on supposition alone. We're not initiating actions in mice (like stopping what they're doing and going to get a drink of water) based on supposition alone. We might still be lacking a complete explanation, but we've got a lot more than just supposition from which to draw.
                              You said, "I've seen robots that are actively learning." To which I respond: learning is not sentience. Further, you said, "Yes, we have really strong systems that are just database-lookup programs. In truth, part of intelligence is database lookup. It's more than that, of course, but you don't have inference and pattern recognition without remembrance of previous encounters." My response is that memory is not sentience.

                              In fact, your post was not very responsive. I referred to sentience, which, as you may recall, is the subject of the OP. You addressed intelligence, not sentience. You responded that we have an incomplete picture but that "I've seen robots that are actively learning." Learning is not sentience, and responding to situations is simply more complex learning. What more do you have? I repeat Sparko's claim that we are no closer to a true sentient artificial mind. Unless you can describe what more we need to do to accomplish artificial sentience, you have done no more than repeat what might have been said 50 years ago.

                              Originally I said: "The goal is definitely simulation. There is no actual explanation for sentience in existence." You have not taken a single step to disabuse me of that opinion. How is Sparko's statement that "We are not any closer to a true sentient artificial mind now than we were 100 years ago" shown to be "downright false"?
                              Micah 6:8 He has told you, O man, what is good; and what does the LORD require of you but to do justice, and to love kindness, and to walk humbly with your God?

                              Comment


                              • #45
                                Originally posted by Jedidiah View Post
                                "We are not any closer to a true sentient artificial mind now than we were 100 years ago,"
                                It has been stated that the first step towards solving a problem is to understand it. Here you go ...
                                We are 'closer' than we were 100 years ago in the sense that 100 years ago we were clueless about the sheer magnitude of what true AI would require. Only in that sense are we any closer.

                                Jorge

                                Comment
