Philosophy 201 Guidelines
Cogito ergo sum
Here in the Philosophy forum we will talk about all the "why" questions. We'll have conversations about the way in which philosophy and theology and religion interact with each other. Metaphysics, ontology, origins, truth? They're all fair game so jump right in and have some fun! But remember...play nice!
That's a whole other can of worms. Which is why I asked the question earlier about whether an AI would actually be what we term a psychopath. It would not necessarily have human empathy or even human-like intelligence. It might end up not caring about humans if it could survive without them, or even being hostile towards humans if it felt we were illogical or irrational compared to the way it thought. Do we really want to let something with that potential control our entire technological society? Because such an AI probably would not be limited to a physical location but would have access to our entire networks.
I honestly don't know. It's uncertain, to my knowledge, how much of our physiology is required for so-called sentience. All the examples on our world share certain basic functions. If these functions are intrinsic to sentience as we know it, it's entirely possible that a sentient AI wouldn't act terribly differently than we do. Would it be a psychopath? Not necessarily, unless you consider elephants and dolphins to be. Would it have different priorities than us? I'd expect the same priorities with different satisfaction requirements. Energy as food, reproduction as...commingling of code? Who knows. I definitely would not expect it to care any more for humans than we do for any other animal. That would probably relegate us to the role of dogs, or something...
JULY 17, 1969: On Jan. 13, 1920, Topics of The Times, an editorial-page feature of The New York Times, dismissed the notion that a rocket could function in a vacuum and commented on the ideas of Robert H. Goddard, the rocket pioneer, as follows: ''That Professor Goddard, with his 'chair' in Clark College and the countenancing of the Smithsonian Institution, does not know the relation of action to reaction, and of the need to have something better than a vacuum against which to react -- to say that would be absurd. Of course he only seems to lack the knowledge ladled out daily in high schools.''
Further investigation and experimentation have confirmed the findings of Isaac Newton in the 17th century and it is now definitely established that a rocket can function in a vacuum as well as in an atmosphere. The Times regrets the error.
As ever, Jesse
Nice wishful thinking there, Jesse. You are doing the same thing shuny is: using a "god of the gaps" type argument. Building a simple neural network doesn't equate to building a sentient mind, nor does it even mean such a thing is possible. Sure, there are people working toward such a goal, but that doesn't mean they will achieve it, especially with what we currently know of the brain, or with the technology we have. Just like people back in the early 1900s were predicting android (human-like) robots in 20 to 30 years, and we still don't have those. Where is my robot with a positronic brain? This is the future, dammit! I want my flying antigrav car too!
Your link to the quantum computer is interesting, but it is hardly more than a prototype, and a very basic one at that, and it does nothing to show that a functional sentient mind can be created with one.
Keep dreaming. I hope one day we actually DO have a good sentient AI. Then we can replace you and shuny and maybe nobody would notice, except shunybot would become more rational.