Welcome to City-Data.com Forum!
Go Back   City-Data Forum > General Forums > Religion and Spirituality > Atheism and Agnosticism
Old 08-13-2014, 11:46 AM
 
5,458 posts, read 6,714,569 times
Reputation: 1814


Quote:
Originally Posted by OzzyRules View Post
My question is, at what point does an iPhone begin to have feelings? Do you really think a machine can become self-aware?
I think the biggest problem here is understanding exactly what consciousness is. For example, I can imagine lots of different definitions of consciousness which don't require emotional responses, so your first question is potentially off topic in this thread.

Perhaps you could explain exactly what you mean by consciousness and which parts of the process you think a machine would have difficulty reproducing.

 
Old 08-13-2014, 12:50 PM
 
Location: Northeastern US
19,979 posts, read 13,459,195 times
Reputation: 9918
Quote:
Originally Posted by Sophronius View Post
I don't think a machine could become conscious in any way similar because,
... the robot would require an entirely new program each time a simulated or stimulated experience is introduced ... so a program would need to be able to develop new programs which have nothing to do with the initially understood program.
Not really. It would just be operating according to a conceptual model, and it would modify the conceptual model. The same processes still operate upon that model.

Back in the early days of computers, programs were very inflexible and static and even physically hard-wired, but we moved beyond that decades ago and have put many layers of abstractions between the user and the physical hardware. The graphical user interface you are no doubt using to read this is a perfect example. In the old days you would sit at a terminal which would offer a prompt and you would key in a response (before that, it was even cruder: you would devise programs and inputs and submit them to a keypunch operator who would produce a deck of punched cards to read into the computer). Now instead the computer offers you a visual environment and responds in real time to your manipulation of the mouse / trackpad and keyboard and possibly even voice commands. That environment constantly adapts to the inputs and in many cases learns your habits and finishes phrases and sentences for you. The same software is still running on your computer that was running on it when you bought it, but it has stored experiences and responded to them nevertheless. It's a crude version of what we're talking about for AI, but the same basic principle.
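The adapting-software point can be sketched in a few lines. In the toy below (every name is illustrative, nothing here comes from the thread), the program itself never changes; it simply accumulates experience in a stored model and uses that model to finish phrases, like the habit-learning text completion described above:

```python
from collections import defaultdict

class PhraseCompleter:
    """The program is fixed; only the stored model changes with experience."""
    def __init__(self):
        # The "conceptual model": next-word counts accumulated from use.
        self.model = defaultdict(lambda: defaultdict(int))

    def observe(self, sentence):
        # Store experience by updating the model, not by rewriting any code.
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            self.model[prev][nxt] += 1

    def suggest(self, word):
        # Finish the phrase with the most habitual next word, if any.
        followers = self.model.get(word.lower())
        if not followers:
            return None
        return max(followers, key=followers.get)

c = PhraseCompleter()
for s in ["good morning everyone", "good morning team", "good evening team"]:
    c.observe(s)
print(c.suggest("good"))     # -> morning (seen twice vs. once)
print(c.suggest("evening"))  # -> team
```

The same `suggest` routine runs before and after learning; only the data it operates on has changed, which is the distinction mordant is drawing between the program and the conceptual model it maintains.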
 
Old 08-13-2014, 01:03 PM
 
5,458 posts, read 6,714,569 times
Reputation: 1814
Quote:
Originally Posted by Gaylenwoof View Post
I already know that you will never agree with the core reason that I think consciousness presents a serious ontological problem. We've all been down this path before, and we already know that we will endlessly bang our heads into a brick wall trying to understand each other, but one thing that I like about the "enactive" approach to consciousness is that it seems to highlight the source of the epistemological problem (as I see it).

Here is what I mean: Suppose, just for the sake of argument, that the properties of conscious experience that I'm calling the "qualitative" or "phenomenal" aspects of conscious experience are, in fact, the intentional actions of certain complex types of physical systems. (Basically, I'm talking about "agency" on the part of the system - actions performed by the system that are causally rooted within the system itself - in the sense that a self-organizing system generates its patterns "internally" based on the dynamic nature of the system itself.) Hopefully it is clear that only the agent of the activity can perform the activity. Certainly physical systems can mimic each other, e.g., you and I can play the same song on a guitar - in principle we can perform in perfectly identical ways - but, if you are the agent of your actions, then I cannot be the agent of your actions.

Agency, in other words, amounts to a necessarily unique perspective. If you and I are physically distinct agents, then your intentional actions cannot be numerically identical to my intentional actions. If you and I are numerically distinct physical systems and the qualitative character of my consciousness essentially just is my internally self-generated actions as an agent, and your qualitative experience just is your internally self-generated actions as an agent, then logic demands that the qualitative characteristics of my consciousness as such are not directly accessible to you.

But notice that "internally self-generated actions" are physical. Thus I'm not denying physicalism. I'm not saying that internally self-generated actions "produce" qualitative experiences; I'm saying they are qualitative experiences. But if this is the case, then logic requires that I cannot directly experience your qualitative experiences. I can observe various objectively accessible aspects of your experiences (your neural activity along with your environment), but I cannot experience your self-generated actions from your perspective - I can only study your agency from my own perspective - via my own self-generated actions, i.e., my own qualitative experience.

For me (but apparently not for you) there is still a further question
Wait, there was a question? I'm guessing for some reason you think that not being able to take over the body of someone else is a challenge to some particular understanding of brain function or another. I personally don't know of any which require this to be possible, but even if they exist I doubt that they are mainstream enough to worry about.

And if that's not a problem for human consciousness, I don't see why it would limit machine consciousness either, so it seems to be off topic.

Quote:
- a desire to conceptually link the physical concepts involved in the notion of "self-generated actions" to what I understand as "qualitative experience." I cannot yet give an explicit logical derivation of the qualitative nature of experience from the self-generated activity I'm calling "agency."
One approach is to look at the research which hints that the qualitative feeling of agency comes from watching our bodies do something at about the same time our conscious minds stumble across a thought of it. The interesting part is that the feeling happens even when other people are suggesting those thoughts and are actually the agents responsible for the action. Could be that these feelings (which I think is one of the many definitions you're jumping between for agency) aren't actually connected to the brain processes which generate those actions, at least not in all cases. Probably works well enough in practice, though, even though we can trick it in some cases.

No way you'd get that from explicit logical derivations, and yet that's how the brain actually seems to work. Just highlights the relative utility of philosophy versus science in generating knowledge on this issue.

Also, there's no reason we'd have to mimic this sort of mechanism in a self-aware machine, so I'm not sure how it relates to the topic at hand.

Quote:
This is where fundamental ontology becomes important. I know that this is not something you care about, and that's fine. I'm not really trying to defeat you in argument over qualia, I'm simply trying to share my own perspective on what I would like to do. We can't do much more than agree to disagree about whether this extra step is worth pursuing.
Explaining how qualitative feelings about conscious states arise and how they relate to the rest of brain function is interesting work. Based on past failures of philosophy at generating knowledge, though, I doubt we're going to get there via "explicit logical derivation" or tossing around important-sounding terms like "fundamental ontology".

And again, just because we don't know the details of how this arises in humans doesn't mean we can't create it using some other mechanism in self-aware machines, so I don't see how this is on topic here.

Quote:
In any case, my identity statement (phenomenal qualities = self-organizing "possible-futures-imagining" goal-directed activity) implies that "conscious" machines will need to be engineered in such a way that they are self-organizing, "possible-futures-imagining" goal-directed agents. I suspect this means that they will need to be "embodied" in some fashion because it seems to me that "simulating agency" is not logically interchangeable with "being an agent."
Can you explain which of these your brain does, and more importantly, how I can determine that fact? Seems like if there is just a vast logical gap it should be useful to know the difference.

We seem to give other people a break on these sorts of questions even though we can't prove that other minds actually exist. But suggest that maybe we could create an artificial mind, and people go nuts over the idea. It goes back to my earlier idea that people like to think that consciousness needs to be magical somehow.

Quote:
A brain-in-a-vat could simulate an embodied brain, but I'm not sure whether or not this activity would count the same as "being" an agent. This ultimately goes back to my "extended mind hypothesis" suggesting that phenomenal consciousness is a holistic property of the world - not a property that can be fully reduced to any one particular part of the world at a particular moment.
If you're saying that internal brain function is influenced by external factors, then this is hardly interesting or new to anyone.

But again, most of this is off topic as far as I can tell. How is any of this supposed to limit the creation of a conscious machine?
 
Old 08-13-2014, 02:02 PM
 
348 posts, read 294,472 times
Reputation: 37
Quote:
Originally Posted by mordant View Post
Not really. It would just be operating according to a conceptual model, and it would modify the conceptual model. The same processes still operate upon that model.

Back in the early days of computers, programs were very inflexible and static and even physically hard-wired, but we moved beyond that decades ago and have put many layers of abstractions between the user and the physical hardware. The graphical user interface you are no doubt using to read this is a perfect example. In the old days you would sit at a terminal which would offer a prompt and you would key in a response (before that, it was even cruder: you would devise programs and inputs and submit them to a keypunch operator who would produce a deck of punched cards to read into the computer). Now instead the computer offers you a visual environment and responds in real time to your manipulation of the mouse / trackpad and keyboard and possibly even voice commands. That environment constantly adapts to the inputs and in many cases learns your habits and finishes phrases and sentences for you. The same software is still running on your computer that was running on it when you bought it, but it has stored experiences and responded to them nevertheless. It's a crude version of what we're talking about for AI, but the same basic principle.
Good post and appreciated.

Last edited by Sophronius; 08-13-2014 at 02:15 PM..
 
Old 08-13-2014, 04:37 PM
 
12,918 posts, read 16,859,470 times
Reputation: 5434
Quote:
Originally Posted by KCfromNC View Post
I think the biggest problem here is understanding exactly what consciousness is. For example, I can imagine lots of different definitions of consciousness which don't require emotional responses, so your first question is potentially off topic in this thread.

Perhaps you could explain exactly what you mean by consciousness and which parts of the process you think a machine would have difficulty reproducing.
There is only one definition of real consciousness I am aware of. Referring to being self-aware. Ability to feel emotional pain or physical pain. Not the simulation of feelings.
 
Old 08-14-2014, 05:37 AM
 
5,458 posts, read 6,714,569 times
Reputation: 1814
Quote:
Originally Posted by OzzyRules View Post
There is only one definition of real consciousness I am aware of. Referring to being self-aware. Ability to feel emotional pain or physical pain.
Animals of all types feel pain, but not all are self aware. So you're describing at least two different things here. Which definition do you want to use?

Quote:
Not the simulation of feelings.
Please prove you are actually feeling feelings rather than having a simulated feeling of having feelings.
 
Old 08-14-2014, 05:45 AM
 
3,636 posts, read 3,424,497 times
Reputation: 4324
Quote:
Originally Posted by OzzyRules View Post
There is only one definition of real consciousness I am aware of. Referring to being self-aware. Ability to feel emotional pain or physical pain. Not the simulation of feelings.
But what I was getting at in my last post, which you skipped and ignored, is how they would be any more - or less - "simulated" than our own. If all we are is a machine - albeit a biological one - then why is our subjectivity any more or less real than another machine's, or a non-biological type's?

Johnny Depp was recently in a movie about artificial intelligence (Transcendence). A government agent, upon meeting the AI, asked "Can you prove you are self-aware?", to which the AI just replied "Can you?"
 
Old 08-14-2014, 07:53 AM
 
Location: Kent, Ohio
3,429 posts, read 2,731,740 times
Reputation: 1667
Quote:
Originally Posted by KCfromNC View Post
But again, most of this is off topic as far as I can tell. How is any of this supposed to limit the creation of a conscious machine?
Hopefully I did not give the impression that we cannot build conscious machines. Just the opposite: my comments were meant to suggest how to build a conscious machine (well...at least roughly outline a few necessary conditions for building one). The title of this thread is "Consciousness in a robot?" I'm saying: yes, in principle I think this might be possible. But I am implying an important distinction between "robot" and "machine" in a more general sense. A robot can interact with its environment - it is "embodied" - and my suggestion has been that this may be a critical requirement for building a conscious machine; the machine needs to be embodied, like a robot or android. If my hypothesis is correct (and, note, I'm simply playing out arguments based on a tentative hypothesis), then we will never be able to build a "mainframe" style of computer like a super-duper version of Deep Blue and program consciousness into it, or expect consciousness to emerge from the circuitry just because it is super-duper complex or processes massive amounts of data per second. My suggestion is that complexity and data processing speed are not sufficient, even in principle.

If my hypothesis is correct, then the minimum necessary features would be:

Great enough complexity to sustain self-organizing patterns of information processing capable of representing the physical world and exhibiting goal-directed motivations derived from interactions with the physical world. (I suspect that virtual interactions with a virtual world would not be sufficient for consciousness because the "physicality" of the world is a necessary ingredient. Thus we would never be able to program a mainframe to be conscious.)

In effect, the machine would need to have "emotions" or "feelings" - not necessarily "human" emotions, but at least it would need motivating "values" of some sort grounded in the physical nature of the world. This is why "self-organizing" complexity is a critical requirement. The physical world itself is, in essence, a self-organizing system and, per my hypothesis, the "rule-governed" nature of physical reality that accounts for its self-organizing properties is, I'd say, the ontological foundation for the qualitative "feelings" we generally associate with "being conscious." So a desktop machine that merely processes information won't become conscious, no matter how complex we build it, but a robot (or a mainframe that can interact with the world via "telepresence" in robot bodies) might.
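Read as an engineering outline rather than metaphysics, the requirements above amount to a sense-model-act loop. Below is a toy sketch of that loop; every class, name, and mechanism here is invented for illustration (the thread specifies no implementation), and "familiarity of the predicted outcome" stands in crudely for a motivating "value":

```python
class ToyWorld:
    """A minimal environment the agent is coupled to (its "embodiment")."""
    def __init__(self):
        self.state = 0
    def read(self):
        return self.state
    def predict(self, action):
        # What the world would look like after the action (the agent
        # borrows the world's dynamics here purely to keep the toy short).
        return self.state + action
    def act(self, action):
        self.state += action

class EmbodiedAgent:
    """Sense -> update internal model -> imagine futures -> act."""
    def __init__(self, world):
        self.world = world
        self.model = {}   # representation built only from interaction

    def sense(self):
        state = self.world.read()
        self.model[state] = self.model.get(state, 0) + 1

    def imagine(self, actions):
        # "Possible-futures-imagining": rank candidate actions by how
        # familiar the predicted outcome is - a crude stand-in for a value.
        return max(actions, key=lambda a: self.model.get(self.world.predict(a), 0))

    def step(self, actions):
        self.sense()
        choice = self.imagine(actions)
        self.world.act(choice)
        return choice

world = ToyWorld()
agent = EmbodiedAgent(world)
for _ in range(5):
    agent.step([-1, 0, 1])
print(world.state, agent.model)  # -> 0 {0: 5}
```

Nothing in this sketch is conscious, of course; it only shows the loop structure (embodied interaction, a self-updating model, imagined futures, a motivating preference) that the post argues a mere data-processing desktop lacks.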
 
Old 08-15-2014, 09:37 PM
 
63,791 posts, read 40,063,093 times
Reputation: 7869
Quote:
Originally Posted by OzzyRules View Post
There is only one definition of real consciousness I am aware of. Referring to being self-aware. Ability to feel emotional pain or physical pain. Not the simulation of feelings.
Quote:
Originally Posted by KCfromNC View Post
Animals of all types feel pain, but not all are self aware. So you're describing at least two different things here. Which definition do you want to use?
Please prove you are actually feeling feelings rather than having a simulated feeling of having feeling.
Quote:
Originally Posted by monumentus View Post
But what I was getting at in my last post, which you skipped and ignored, is how they would be any more - or less - "simulated" than our own. If all we are is a machine - albeit a biological one - then why is our subjectivity any more or less real than another machine's, or a non-biological type's?
Johnny Depp was recently in a movie about artificial intelligence. A government agent, upon meeting the AI, asked "Can you prove you are self-aware?", to which the AI just replied "Can you?"
Ozzy . . . the atheists will always eventually resort to solipsism . . . despite its resounding defeat and refutation. A materialist or empirical reductionist is at heart a solipsist-in-hiding under the atheist label. I have been unable to understand how anyone can actually believe they do not exist as anything but individual chemical processes in a brain. It is something I am unable to have faith in. The only thing I know for certain is that I exist and I am not my brain or my body. Even Gaylen . . . who knows enough about the philosophical issues surrounding our sense of being . . . somehow manages to reconcile himself to such lack of existence. After all . . . when you say something is an illusion . . . you are saying it doesn't exist.
 
Old 08-16-2014, 02:39 AM
 
Location: S. Wales.
50,087 posts, read 20,700,397 times
Reputation: 5929
Of course that is not really solipsism (I like the way you pin it on us and then use it to imply defeatism), but perhaps you are just using it as an analogy. I thought we had agreed that everything is an illusion, but it is a real one.

Nothing is as it seems to us, but it is what it is and we know what it is, even if we don't know everything about it.

This is known fact, which is why your objections don't stand up for a minute. And of course it has nothing to do with monumentus' post; there seems to be no serious objection to the idea that the same sort of consciousness, mind and thought that we have could be duplicated, or at least closely approximated, in a machine.

KC makes a useful point. Some creatures react to pain stimuli as a signal that there is something to be avoided. This is an evolved survival mechanism of the same order as firing wildly at a bush when night jitters lead us to fear that it is the enemy creeping up on us.

And the reactions of the worm or mollusc are a biological development of the chemical reactions that bind molecules together. When this is understood, and the obvious differences do not fool us into thinking that they must somehow have different origins or causes, the whole consciousness thing becomes clearer.

Of course, this does leave room for your idea that all of this consciousness is interconnected and adds up to a cosmic - sized intelligence AKA God. I have never denied that. It also leaves room for the idea that this is not the case.

My objection to your theory is that you have insisted that it is fact, on Faith, and then tried to find ways of proving it, none of which have stood up and which too often rely on trying to do down the disbelievers. As indeed you did just now.

Last edited by TRANSPONDER; 08-16-2014 at 02:48 AM..




© 2005-2024, Advameg, Inc. · Please obey Forum Rules · Terms of Use and Privacy Policy · Bug Bounty
