Welcome to City-Data.com Forum!
Old 02-27-2013, 10:23 AM
 
93 posts, read 197,071 times
Reputation: 47


Gaylenwoof, let's examine your four original posits:

1) Electrical engineers/biomedicists can prove that their robots actually develop a form of consciousness and have adaptive learning capabilities.

2) Many who examine animal behavior clearly see that a form of consciousness exists, as do emotions similar to those of humans, and communication along human lines but within the constraints of the species. There are some who think that this responsiveness goes beyond the animal kingdom into the plant world, and a few who suggest that it can be observed in more inert objects.

3) If one takes into account that each quark is specialized and part of a system within at least one universal system, then yes, there is a universal consciousness, but not limited, as Ukrkoz seems to suggest, to human thought. Not that Ukrkoz is not on a path to understanding, but that path seems human-centric and so must by nature be limited.

4) Yes. Self-evident if one is not bound by human assessment.

Now, perhaps I misunderstand your quandary, but it seems to me that you are trying to answer whether there is a Universal Law and/or a theistic explanation for how systems operate, whether these systems are closed systems or open, and what the effects of random activities may be on the generalized as well as the specific systems. Perhaps that is much too simple and I've missed the mark. Please expound and correct any misrepresentations as I have not followed this forum before but think this could be an interesting and fun discussion.

 
Old 02-27-2013, 12:39 PM
 
Location: Kent, Ohio
3,429 posts, read 2,735,118 times
Reputation: 1667
Quote:
Originally Posted by paxquest View Post
1) Electrical engineers/biomedicists can prove that their robots actually develop a form of consciousness and have adaptive learning capabilities.
First, I suggest that you never use the word 'prove' unless you are referring to a mathematical theorem. Science is empirical, and empirical theories are never really "proven" - they are supported or confirmed by data, but the theory itself is, technically, never proven. A theory is simply a good tool to use until a better one comes along.

Now, moving on to the real point: I can safely guarantee you that no credible researcher is claiming to have made a conscious machine. AI has come a long way, but we still have a long way to go before we achieve true machine intelligence. Certain machines can excel at specific tasks (what are called "expert systems") such as playing chess or diagnosing disease, but virtually no one is claiming to have achieved true AI. And, yes, artificial neural networks have demonstrated learning, but here again we are still falling short of anything that we would want to call true intelligence. If you want to post a link that proves me wrong about this, I'd be happy to take a look at it. But, more importantly, even if we do create a machine that seems to be genuinely intelligent, we still couldn't say that the machine is actually conscious or sentient.
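For what it's worth, the kind of "learning" that artificial neural networks demonstrate can be shown with a toy example (a sketch in Python, not any particular researcher's system): a single perceptron that learns the logical AND function from labeled examples. It genuinely adapts its weights from experience, but nobody would mistake this for understanding, let alone sentience.

```python
import random

# A single artificial neuron trained by the classic perceptron rule.
# It "learns" the AND function from examples - genuine adaptation,
# but nothing anyone would call understanding.

def train_perceptron(samples, epochs=50, lr=0.1):
    random.seed(0)
    w = [random.uniform(-0.5, 0.5) for _ in range(2)]  # random initial weights
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # zero when the guess was right
            w[0] += lr * err * x1       # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

Since AND is linearly separable, the perceptron rule is guaranteed to converge here; that guarantee is exactly what breaks down for the "common sense" intelligence discussed above.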

The term the "Hard Problem" was made famous by David Chalmers in a book entitled The Conscious Mind: In Search of a Fundamental Theory. He called it "the hard problem" because he was trying to distinguish this problem from the merely technical details of engineering. There are no deep metaphysical mysteries about how to create machines that behave as if they are intelligent. We don't know exactly how to do it yet, but we have a generally good idea about how to go about it. Chalmers, tongue-in-cheek, calls these the "easy problems." They are not actually easy (it might be decades or even centuries before we solve them), but we have a general idea of how to go about solving the engineering issues. The problem is that sentience/consciousness appears to be more than mere behavior. I can program my laptop to tell me it loves me every morning, but I don't believe that the machine actually feels all warm and fuzzy when it does so. How do we build a machine that actually feels sensations or emotions? That brings us into the realm of the hard problem. Not only do we not know how to do this, we don't even know how to go about trying to find out how to do this. We don't have any credible theories to guide us. We are "metaphysically baffled" according to Chalmers (and I agree with him on this).

Perhaps you are suggesting that behavior itself, if it seems intelligent, necessarily indicates actual sentience? If that is what you are thinking, then we need to have a whole nuther discussion.

Last edited by Gaylenwoof; 02-27-2013 at 12:49 PM..
 
Old 02-27-2013, 01:52 PM
 
Location: Kent, Ohio
3,429 posts, read 2,735,118 times
Reputation: 1667
Quote:
Originally Posted by paxquest View Post
...it seems to me that you are trying to answer whether there is a Universal Law and/or a theistic explanation for how systems operate, whether these systems are closed systems or open, and what the effects of random activities may be on the generalized as well as the specific systems.
There might be undiscovered natural laws at work that could help us solve the hard problem. In fact, I suspect that there are some key ingredients of this sort missing from science and philosophy at the moment. I do not, however, think that we will need a theistic explanation in order to solve this. We will, however, need to deal with the qualitative aspects of mind/reality in a way that currently goes beyond science as we know it.

I strongly suspect that holism will play a major role. Holism is an inescapable feature of all versions and interpretations of quantum theory that have so far been proposed, but it is a weird aspect of the theory that is not very well understood at this point. One important implication of holism, however, is that we cannot seriously isolate any given component of reality from other components. If we grant that qualia really exist, then I don't think it makes sense to pretend that we really understand physical systems if our theory fails to take qualia into account.

Basically what I'm suggesting is that qualia must have measurable physical consequences, even if we do not currently know how to fit them into our current theories. A sentient organism and a non-sentient organism must be physically different, even if they look identical on the surface and seem to behave identically (in the terminology of Chalmers: your zombie double cannot be physically identical to you down to the quantum level). The fact that you feel what it is like to see blue must have measurable effects on your body - probably affecting your neural activity - and this activity will probably (it seems to me) deviate from what would be predicted purely on the basis of physical theory as we currently know it. Or to put it another way: I'm predicting that evidence for consciousness will eventually be found in the form of deviations from prediction, once we are able to track the detailed activities of individual neurons in large numbers in a living brain. We are getting close to this. We now have the technology to monitor individual neurons in a living human brain, so it seems that a good test of my prediction might come within the next few decades.

Notice, however, that even if my predictions come true, this won't necessarily imply that qualia are "in the brain." What we need is more than the discovery of correlations between qualia and brain activity. For a true theory, we need to be able to explain why these correlations are what they are. Is the brain "generating" qualia? Or is the brain "filtering" qualia? This is the sort of thing that a theory of mind would address.
 
Old 02-27-2013, 02:22 PM
 
Location: On the "Left Coast", somewhere in "the Land of Fruits & Nuts"
8,852 posts, read 10,461,442 times
Reputation: 6670
Quote:
Originally Posted by Gaylenwoof View Post
I, too, am attracted to this approach. One famous advocate of this was Aldous Huxley, who wrote a fascinating book called "The Doors of Perception." (Yeah, he was inspired to write about this based on his experiences taking mescaline, but hey, I'm totally ok with that.) This theory is sometimes referred to as the "Mind at Large." I think there is something deeply truthful about this way of thinking, but like every other attempt to formulate a theory of mind, it leaves a crucial piece of the puzzle completely unsolved. We are still left wondering why/how this particular sort of physical system (the brain) filters the "mind." How can we characterize the relationship between neural activity and the feeling of seeing blue in such a way that we are inspired to say: "Oh, yeah! Now I see it! Blah, blah, blah, and that's why we need a visual cortex," or whatever.
I recently viewed DMT: The Spirit Molecule (BTW, also available on Netflix), so that's very interesting about Huxley, especially since the use of ayahuasca, mescaline, peyote and LSD all basically invoke the same neurotransmitter molecule in the brain, dimethyltryptamine (DMT). And aside from all the sensory distortions, what all those 'psychedelics' really seem to share, and most especially in the case of ayahuasca (which is basically pure DMT), is the common experience of entering some sort of transcendent space that feels like a ''universal consciousness'' underlying everything.

So it's not difficult to see how our respective ''individuated'' human consciousnesses may each be ''partaking of'' or ''tuning into'' that larger ''universal consciousness'', if there is such a thing. Although unless I'm misunderstanding you, I'm not seeing the difficulty of accepting that the processed input from the ''meat'' of our immediate sensory organs (sight, hearing, etc.) simply ''mediates'' our localized connection and perceptions, even as the analytical ''computer'' in our brains (aka, the left hemisphere) constantly analyzes and 'explains' it all for us (thru varying degrees of integration with the transcendent ''whole''). Which BTW, speaks to the mystics' notion that our perceptions and control are all mediated by the ego (aka, left brain), basically making our conscious point of view an ''illusion'' (or at best, only a partial view of ''reality'').

Which also leads us to Jung's fascination with communicating with the unconscious or right hemisphere, which arguably may be the part of the brain that's actually ''tuned into'' that ''universal consciousness''. Although the role of the unconscious, which 'speaks' in dreams, symbols, archetypes, etc..... may be that it's actually the part that's really ''in control'', and the conscious left brain simply ''interprets'' all those ''impulses'' for us (which of course is another subject)!

Last edited by mateo45; 02-27-2013 at 02:46 PM.. Reason: links..
 
Old 02-27-2013, 03:01 PM
 
Location: Kent, Ohio
3,429 posts, read 2,735,118 times
Reputation: 1667
Quote:
Originally Posted by mateo45 View Post
...unless I'm misunderstanding you, am not seeing the difficulty of accepting that the input from our immediate sensory organs (sight, hearing, etc.) simply ''mediates'' our localized connection and perceptions with the transcendant ''whole'', even as the analytical ''computer'' in our brains (aka, the left hemisphere) constantly analyzes and 'explains' it for us.
In science, we start with a problem, then we guess at a solution (i.e., propose a theory) and make predictions based on this theory. Then we do experiments to see if our predictions come true. If our predictions fail, then the theory has to be discarded or altered. In philosophy, we try to do something similar, except that philosophical systems do not necessarily have to be tested empirically. Instead of making empirical predictions, we investigate the logical implications of the theory. If the implications seem reasonable, then the theory survives; if the implications lead to logical contradiction or absurd claims, then the theory doesn't look so good.


A good theory of mind should imply answers to the sorts of questions I listed at the beginning of this thread. Think about how this works with other theories. Evolution implies that we ought to find some mechanism by which characteristics can be passed, with possible variations, from parents to offspring (without such a mechanism, there would be nothing for natural selection to select). In effect, Darwin's theory implied the existence of something like DNA long before DNA was discovered. That's a cool thing for a theory to do. A theory of mind should imply things too. Some of these implications may be surprising at first, but if the theory is good, further investigations should reveal that these implications make sense.

What I think we are missing is a theory that logically implies answers to questions about the mind/brain relationship.
 
Old 02-27-2013, 03:13 PM
 
93 posts, read 197,071 times
Reputation: 47
See response below to Gaylenwoof's 3:39 post. I'll have to read your next post and mateo's, which came in while I sorted through my thoughts, and get back if I have any questions, but I thank you for your first response to my post.


AI can only be based on mathematical algorithms. Strong AI and neuromorphics are wildly complicated fields, at least to me, with many technologies currently being tested by a few governments as well as private enterprises. I am not an AI/AGI/EE/MD, but perhaps you may wish to read international scientists like Ray Kurzweil, Kevin Warwick, A. Nagar, T. Robinson, S. Massaquoi et al., who actually posit and perhaps prove to their colleagues' increasing satisfaction that bioengineering has advanced sufficiently to support my feeble statement "...their robots actually develop a form of consciousness and have adaptive learning capabilities." Since I am curious but do not have a rigorous scientifically trained understanding, I'm comfortable with that simple premise even if current technology has not yet spawned what you term "sentient" machines. Since there are huge ethical hurdles associated with that concept, I for one am thankful we're not quite there yet, since IME many humans are not quite fully sentient beings. Cyborgs would clearly have the upper hand in many cases, so to speak. (snark)

How to build a bionic man | KurzweilAI (See Link) is just one easily accessible short article, though there are undoubtedly more rigorous papers that those more learned than I can provide.

Chalmers approaches the question from a philosophical perspective that is already somewhat outmoded in terms of the "learning" machines being developed by the relatively few neuro-bio/AGI scientists who can actually break down human neural processes and map them into mechanical functions that have the beginnings of sentience. So it seems we do have machines with emerging emotional/sensation capabilities, though they are not yet mass-marketed.

May I ask if you are willing to consider other animal forms as sentient? The only reason to again bring this idea up is to challenge the so-called cognitive revolution of the past 50+ years, which is still very human-centric. There are some ethicists who think our current/more recent body of thought in this arena will be displaced as humans creep forward in understanding of those who share our planet, just as Einstein superseded Newton and now even his relativity theories are being challenged.

To answer your last question: no, behavior is not actual sentience. See my snarky remark. I'd be happy if you shared your thoughts, even if it takes all of us into a whole "nuther" chat.
 
Old 02-27-2013, 06:42 PM
 
Location: On the "Left Coast", somewhere in "the Land of Fruits & Nuts"
8,852 posts, read 10,461,442 times
Reputation: 6670
BTW, since we're including AI in the discussion, allow me to include Marvin Minsky's idea of the Society of Mind in the mix. It is basically a model of human intelligence as being built up from the interactions of simple parts he calls "agents": mindless, specialized functions that work together in invisibly coordinated groups (hence the title).

One example he uses is recognizing all the processes and calculations involved in something as simple as picking up a coffee cup and raising it to your mouth... one "agent" judges the distance, another the weight of the object, a third keeps it level, and on and on... all working together harmoniously and in the proper sequence. Minsky (BTW, who also ran the AI department at MIT for many years) suggests that even the phenomenon of "consciousness" is basically an illusion (or an emergent quality at best) that can be similarly broken down into various "agents".
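The coffee-cup example can be loosely sketched in code (a sketch only; the function names here are invented for illustration and are not Minsky's): each "agent" is a mindless specialist, and a coordinator merely sequences them. No individual agent, and arguably no part of the whole system, "knows" what drinking coffee means.

```python
# A loose sketch of Minsky-style "agents": each is a mindless specialist,
# and the apparently unified act of lifting a cup is just their
# coordinated sequence. All names are invented for illustration.

def judge_distance(cup):    # one specialist: how far to reach
    return cup["distance_cm"]

def judge_weight(cup):      # another specialist: how heavy the object is
    return cup["weight_g"]

def keep_level(tilt_deg):   # a third: clamp wrist tilt so nothing spills
    return max(-5, min(5, tilt_deg))

def lift_cup(cup):
    # The "coordinator" just sequences the agents in the proper order.
    reach = judge_distance(cup)
    grip = 0.5 if judge_weight(cup) < 300 else 0.9   # lighter cup, gentler grip
    tilt = keep_level(12)    # an over-tilted wrist gets corrected to 5 degrees
    return {"reach_cm": reach, "grip": grip, "tilt_deg": tilt}

plan = lift_cup({"distance_cm": 40, "weight_g": 250})
```

Each function is a "black box" to the others, which is the point Gaylenwoof takes up below: the agent scheme says nothing about whether anything in the box experiences anything.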

Last edited by mateo45; 02-27-2013 at 06:52 PM.. Reason: link..
 
Old 02-27-2013, 09:40 PM
 
Location: Kent, Ohio
3,429 posts, read 2,735,118 times
Reputation: 1667
Quote:
Originally Posted by paxquest View Post
May I ask if you are willing to consider other animal forms as sentient?
Yes, I believe that other animals are sentient (which is to say, they experience sensations and emotions, or in other words, "there is something it is like to be" a dog, cat, squirrel, etc.). Since I don't have the sort of theory I am looking for, I can't do anything but guess about the possible sentience of flies, worms, plants, or bacteria. I would guess "no" for these sorts of organisms, even though flies and worms might be said to display some levels of intelligence. Later I will try to say more about self-awareness and the prospects for modeling consciousness using cellular automata or neural networks.

BTW, for my purposes, I use the term 'sentient' to get at the idea of experiencing qualia, whereas I think of 'consciousness' as sentience plus some level of cognitive intelligence and self awareness. This is a common sort of distinction, but I wouldn't claim it's necessarily the one and only "correct" way. A lot of people use 'sentience' and 'consciousness' as interchangeable terms.

Concerning AI: I am familiar with a lot of recent work in AI and neural networks (I've actually taken two grad classes on the subject of neural nets), but there is also a lot that I don't know. I've read The Age of Spiritual Machines, by Ray Kurzweil, (and I've read some of Marvin Minsky) but I'm not familiar with the others you mentioned. I will have more to say about AI later, when I have some more time.
 
Old 02-28-2013, 08:50 AM
 
3,448 posts, read 3,134,063 times
Reputation: 478
It's a common ground association-recognition... in a mutual setting-experience through time, space and gravitational influence (order-disorder... survival and association). So man meets whatever - color, sound, person - and then catalogs its persevering reality in existence by an associating, relative, recognizing sensitivity in the brain. So with the collapsing wave and the double-slit experiment, the measure from the consciousness or the clock establishes time... relative to the particle... relative to our time, due to a gravitational flux in time-space caused by the measuring... there is a time issue going on and a perception issue. We're all in motion... along with an ever-present force, gravity. Time relative to the particle shuts off and on, relative to our time. Gravity-density interference... same as us... but we don't notice, due to our experience in time, consciousness-photosynthesis-mass-light-constant... we have properties of both light and mass, onto motion > focus, and voila.

This, in my opinion, is a gravity issue. So all the different colors or ideas, a chair, whatever, can be identified with, relative to their survival in existence.

Let's take a person with absolute pitch. Each note has a different personality, or we could say qualia. E is mellow and soft; F, the opposing extreme, harsh and abrupt. A system, and then all others fitting in with their idea, or style, in time, space and gravity. Just like people, color, the organization of a chair. Star systems are another issue. It's not the vibration and wave all together; I know this because I have better than absolute pitch and can discern even if out of tune by up to 1/2 a tone. So for example: that's a D, but it's really a D-flat trying to be a D, because it's out of tune by 1/2 a semitone. The feeling I get, as it's said in the qualia experience, would be an association in the sound's complacency, or resting existence in its manner in existence - survival.
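Just to put numbers on the "half a semitone" idea (a quick sketch assuming standard equal temperament with A4 = 440 Hz; none of these figures come from the post above): one semitone is conventionally divided into 100 "cents", so half a semitone is a 50-cent deviation.

```python
import math

# Equal-temperament frequencies and "cents" (1 semitone = 100 cents).
# A note "out of tune by half a semitone" is 50 cents off.

def note_freq(semitones_from_A4):
    # Each semitone multiplies frequency by the twelfth root of 2.
    return 440.0 * 2 ** (semitones_from_A4 / 12)

def cents_between(f1, f2):
    # Logarithmic pitch distance: 1200 cents per octave.
    return 1200 * math.log2(f2 / f1)

d5 = note_freq(5)                # D5, five semitones above A4 (~587.33 Hz)
flat_d5 = d5 * 2 ** (-0.5 / 12)  # the same note, 50 cents flat
```

So a pitch midway between D-flat and D sits 50 cents from either neighbor, which is the kind of deviation a listener with very acute pitch discrimination could plausibly notice.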

In this it's almost impossible not to reduce to basic survival, identification in our setting, sensitivity, and then an association in the complexity-appropriate filing capability by association in the brain. So when we see the color blue... the experience is simply refreshing and reaffirming what is already in memory from the first experience of seeing blue, and the mutual common-ground material understandings in time, space and gravity.

It's fun to get the opinion out; I write quickly and have enjoyed reading along. Looking forward to the thinking, and feel free to pass over. I wasn't going to comment until much later, but the OP was getting very interesting there and I got a little inspired with my kick at the cat. Anyway... lots of ideas, and so it goes. I like to be patient on these things and have joining ideas with possible suggestions for experiment... so I probably will not open this page up for about 5 days or so.

Last edited by stargazzer; 02-28-2013 at 09:38 AM..
 
Old 02-28-2013, 10:20 AM
 
Location: Kent, Ohio
3,429 posts, read 2,735,118 times
Reputation: 1667
Quote:
Originally Posted by mateo45 View Post
Minsky...suggests that even the phenomenon of "consciousness" is basically an illusion (or an emergent quality at best) that can be similarly broken down into various "agents".
For Minsky, the idea of an "agent" is that it is a specialist. Like, for example, you might hire a travel agent or a financial agent to take care of these aspects of your life. People often think of Minsky's agents as simple things (sub-programs) that collectively compose a complex thing (the conscious mind), but Minsky himself was trying to avoid this. For him, an agent is a "black box." Your travel agent is not necessarily "simple." The point is that you don't know, or don't generally care to know, the details of how your agents accomplish the tasks they are given.


I really like Minsky's approach to all of this, and I think that a conscious being can, in fact, be usefully thought of as a collection of agents and agencies. This is all really cool stuff, but unfortunately it doesn't really get at the hard problem. Minsky is focusing on how to analyze intelligent behavior and someday build machines that can behave in intelligent ways. None of this addresses the notion of qualia - the feeling of being a being who experiences the world as a world; red as red, pain as being sharp or dull, or throbbing, etc. We can program a machine to distinguish between colors, but this behavioral ability does not logically imply that the machine experiences colors. We can program a machine to say "ouch" when we bang on its keyboard too hard, but this doesn't mean it experiences discomfort when you bang on its keyboard. Minsky's agents are "black boxes" - which means they might or might not experience qualia. The concept of "agent" simply does not imply anything about the experience, or non-experience, of qualia. Minsky is thus offering an interesting theory of intelligence, but he's not really offering a theory of mind capable of solving the hard problem.
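The behavior-versus-experience point can be made painfully concrete with a trivial sketch (hypothetical code, not any real system): a program that reliably distinguishes colors and "complains" about hard keystrokes, where every response is plainly just a comparison or a threshold.

```python
# A machine that reliably *distinguishes* colors and says "ouch!" without
# any suggestion that it *experiences* blue or feels discomfort: every
# response here is a comparison rule firing, nothing more.

def dominant_channel(rgb):
    # Classifies a color by its largest RGB component.
    r, g, b = rgb
    if r >= g and r >= b:
        return "red"
    if g >= b:
        return "green"
    return "blue"

def react(key_force):
    # Programmed "pain" behavior: a threshold rule fires when the
    # keyboard is banged too hard, but nothing is felt anywhere.
    return "ouch!" if key_force > 9 else ""
```

The program's color-sorting behavior is flawless, yet nothing in the code gives any reason to think there is something it is like to be this program, which is exactly the gap the hard problem names.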

I wish I could give you an example of a theory of mind that seems to solve the hard problem. Then I could say: "See, this is roughly what a theory of mind should look like." Then I could analyze various aspects of the theory and say: "This theory is wrong because it implies X, but X is obviously implausible," or, "X is logically self-contradictory," etc. But I can't offer you such a theory because we don't have any, and most philosophers would say that we don't even have an intuitive sense of what a theory of this sort might look like. This stands in contrast to, say, a theory for the building of intelligent machines. We can't build intelligent machines right now because we don't have a theory for the general "common sense" sort of intelligence that an intelligent machine needs to exhibit. But we do, however, have a general intuitive sense of how to go about creating intelligent machines. We have this feeling (right or wrong) that we can keep building more and more complex machines that are capable of doing more and more things until one day we can say, with reasonable confidence, that we have built a truly intelligent machine.

Some people want to say that there is no "hard problem" because qualia will just automatically come along with the increasing complexity of the machines that we build. That's fine; I even suspect that this may be true. But it misses the whole point of the hard problem. Sure, you can SAY that qualia will just come along for the ride, but simply saying this doesn't actually explain anything. It also doesn't help us predict much of anything. The problem is this: WHY should increasing complexity give rise to qualia? Personally, I suspect that something more than mere complexity is needed. Maybe we need something like Minsky's agents to produce qualia, but if this is the case, then we need to break into the black box to see exactly how it is that an agent produces qualia. Simply saying "the agent does it" does not really get at the core of the problem.
