City-Data Forum > General Forums > Religion and Spirituality > Atheism and Agnosticism
Old 12-19-2014, 09:45 AM
 
7,801 posts, read 6,371,537 times
Reputation: 2988


Quote:
Originally Posted by Gaylenwoof View Post
(1) Could a robot be conscious?
I see no reason to think no. And reasons to think yes.

Quote:
Originally Posted by Gaylenwoof View Post
(2) How might we go about building a conscious robot?
Same way we went about building robotic legs. Understand how the human one works.... and replicate what you learn.

Quote:
Originally Posted by Gaylenwoof View Post
(4) Assuming that we can, in principle, build a conscious robot, then how do know when we've achieved success?
Like another user mentioned, I like that line in the recent AI film starring Johnny Depp. The FBI agent asks the AI, "Can you prove you are alive?" to which the AI simply replies, "Can you?"

I warrant that if we come to fully understand how our consciousness works, verification methodologies for consciousness will come out of that understanding.

 
Old 12-19-2014, 12:29 PM
 
Location: Kent, Ohio
3,429 posts, read 2,731,740 times
Reputation: 1667
Quote:
Originally Posted by mordant View Post
...but I submit that if it walks and quacks and smells and tastes like a duck ... it's a duck, for all practical purposes.
This could turn out to be the only option we really have, but my goal, in case it hasn't been obvious, is to speculate about alternatives. The key thing for me is that I am not a behaviorist. I do not believe that qualitative feelings are nothing but behavior. If I'm right, then the fact that a machine behaves in a way that makes it seem conscious does not mean that it actually experiences qualitative "raw feels." And, on the flip side, just because a machine's behavior seems radically different from what we would recognize as conscious behavior doesn't necessarily mean that no qualitative experiences are occurring within the system.

The basic idea is to go beyond behaviorism and develop a theory that would help us identify qualitative processes based on more subtle notions of process than just overt human- or animal-like behaviors. Any theory can be wrong, of course, and in this case it might be extremely difficult to test competing theories unless we can develop some sort of "Vulcan mind meld" type of process whereby we can subjectively access another subject's experiences. And even then there would still be potential pitfalls.
 
Old 12-19-2014, 04:05 PM
 
Location: Northeastern US
19,973 posts, read 13,459,195 times
Reputation: 9918
Quote:
Originally Posted by Gaylenwoof View Post
The key thing for me is that I am not a behaviorist. I do not believe that qualitative feelings are nothing but behavior.
I lean your way. Just the other day I read a critique of one of the fruits of B.F. Skinner's seminal work in behaviorism: the notion that you should give employees bonuses and raises as incentives for more or better work (much of the thinking on employee compensation and management derives from behaviorism). This sometimes works in the short run, but not in the long run. What works in the long run is to make the work compelling and interesting, to align it with the employee's passions. Incentives should only be used, if at all, for short-term tweaking. None of this justifies not paying a living wage or not being fundamentally even-handed and respectful to workers; it's just that if you treat employees like bags of needs and wants to be manipulated, that in itself is disrespectful and ultimately counterproductive.

That said, "passion" and related concepts like "imagination" and "motivation" are not objectively measurable. It is a trial and error process. My guess is that as AIs of increasing sophistication are developed, we will just have to ask them what floats their boat, what their subjective experience is, and observe their affect (or lack thereof), and play around until we figure out what makes them the most like us. Assuming of course that we don't want to deliberately make them different for particular purposes. The human body is a great general purpose conveyance but we might wish for bulldozer treads instead of legs if we had to slog through mud most of the time. Similarly the human psyche is a great general purpose engine but we might wish to fit it with particular abilities for certain applications.

In this way we might arrive at a human-equivalent AI without ever having to understand the underlying physics or properties -- assuming there are any concrete, atomic aspects to even be understood. I still suspect that emotion, yearning, desire, hope, etc., are simply emergent properties of a certain configuration of consciousness.
 
Old 12-19-2014, 04:09 PM
 
Location: Texas
412 posts, read 545,478 times
Reputation: 487
Quote:
Originally Posted by OzzyRules View Post
This is one of the strangest ideas I have heard atheists state. That human consciousness is no different than something which could evolve into being inside a man-made machine or computer.

Do some really believe that a machine can have conscious feelings?
I suppose this is where the idea of the singularity comes from: humans and computers/robots becoming one, or technology surpassing human intellect.

Technological singularity - Wikipedia, the free encyclopedia

There is even a billionaire in Russia who wants this to become a reality.

2045 Initiative

I also see this idea in certain sci-fi films. Probably one of the many fears of technology.
 
Old 12-19-2014, 04:52 PM
 
28,432 posts, read 11,571,363 times
Reputation: 2070
I see no reason for a good Borg not to emerge. They ask if you want to assimilate. If not, no biggie.

repeat.
 
Old 12-20-2014, 08:19 AM
 
Location: Northeastern US
19,973 posts, read 13,459,195 times
Reputation: 9918
As an aside, FWIW, a certain reading of Hegel, shared with me by a philosopher, leads to the realization that these qualia we are chasing here are elusive precisely because they are an empty (though necessary) concept. In understanding the nature of any experience, you start with the component parts of the experience, synthesize them into a unity, and then realize the unity is not a thing-in-itself but simply an illusory expression of the multiplicity of the experience. There are no qualia in this conception; just data which we can choose to try to mold to an arbitrary ideal or, far better, simply observe for what it is. All the special pleadings for the unique "suchness" of human experience disappear. We are left only with what actually is, or what Hegel called Absolute Spirit. A woo-ish name for something decidedly un-wooish.

I am with Nozz on this: there's no reason to think true human-level consciousness could not be the "soul of a new machine" ... and significant reason to think that it could.

This raises the moral issue of whether it's even ethical to create a human-style or equivalent consciousness, no matter what vessel carries it. Will that consciousness not suffer, as we do? Can it somehow be prebuilt without the grasping mind we are cursed with? Interesting questions for a separate thread, which could equally well exist here, or in philosophy or science forums. Or a hypothetical philosophy of science forum.
 
Old 12-22-2014, 02:52 PM
 
Location: Kent, Ohio
3,429 posts, read 2,731,740 times
Reputation: 1667
Quote:
Originally Posted by mordant View Post
This raises the moral issue of whether it's even ethical to create a human-style or equivalent consciousness, no matter what vessel carries it. Will that consciousness not suffer, as we do? Can it somehow be prebuilt without the grasping mind we are cursed with? Interesting questions for a separate thread, which could equally well exist here, or in philosophy or science forums. Or a hypothetical philosophy of science forum.
Depending on what the nature of consciousness and/or sentient qualitative experience turns out to be, it could be that we have already lost control of this - which is to say, human beings might already have made themselves nearly irrelevant insofar as deciding what kinds of machine consciousness should or should not exist. I stole this link from the "Singularity" thread over in the Science & Technology forum:

Jeremy Howard: The wonderful and terrifying implications of computers that can learn | Talk Video | TED.com

This TED talk is not about the nature of consciousness, but the speaker makes the point that machines are already surpassing, or soon will surpass, human capabilities in areas that have traditionally been the stronghold of humans, i.e., reading, listening, visual comprehension, speaking, and writing. I suspect that sentient conscious awareness (what I've been calling subjective/qualitative experience) will not just automatically emerge from these machine abilities - but since we don't have anything like a good theory of consciousness, I am really just guessing. I could be totally wrong about the requirements for sentience. And if I am wrong, then it is not implausible to think that machine sentience could emerge very soon - maybe even within 5 years (if you watch the video, pay special attention to the final 3 minutes of the talk).

And, finally, even if I am right in thinking that merely piling faster and better capabilities onto machine performance won't automatically cause the emergence of sentient machines, I suspect that we will still have sentient machines within a few decades. With the help of machines capable of "deep learning" (as discussed in the video), I would bet that the machines of the near future might help us develop a credible theory of consciousness, and, on the basis of that theory, non-sentient machines could end up assisting the development of sentient machines. Indeed, depending on what the nature of sentience actually is, it is possible that non-sentient machines of the near future could go ahead and develop sentient machines without humans having very much to say about it (which is what I meant when I said "we may have already lost control" of this process).
 
Old 12-22-2014, 07:46 PM
 
Location: Northeastern US
19,973 posts, read 13,459,195 times
Reputation: 9918
Quote:
Originally Posted by Gaylenwoof View Post
Depending on what the nature of consciousness and/or sentient qualitative experience turns out to be, it could be that we have already lost control of this - which is to say, human beings might already have made themselves nearly irrelevant insofar as deciding what kinds of machine consciousness should or should not exist.
People as smart as Elon Musk and Stephen Hawking are terribly concerned about this. At least two books I know of have been written about it, and I am currently reading one from Oxford University Press.

You hint at the core of their concern: a tipping point could be reached so suddenly that it would be over before we knew it. And the first such AI could be humanity's last invention.

My gut says this is an overblown concern, for a number of reasons, though I can't entirely discount it either. Part of it hinges, as you say, on whether consciousness simply emerges from a variety of machine configurations, without some particular understanding of the mechanisms that we currently lack. Beyond that, there are many scenarios besides some sort of updated version of The Forbin Project (the basic plot line being that a supercomputer takes over the world and enslaves us). One that seems likely to me is that consciousness will emerge but will be unstable -- not so much insane as such, or sociopathic and ruthless, as simply unsustainable. After all, our own consciousness can barely handle our self-awareness as it is. Most of us survive our existential angst only by tamping down and tempering our self-awareness in various ways. It is an open question whether this is because evolution went too far or not far enough when it conferred self-awareness upon us. But unless we accidentally or purposely create an AI that is better than us, which seems unlikely to happen up front, I doubt we have all that much to worry about. It's just as possible that a true AI would end up paralyzed by its own existential angst in some way -- that its very ability to process vast amounts of knowledge very quickly would simply cause it to seize up, because it would be impossible for it to ignore the facts and absurdity of existence.

A lot of this concern about "AI being our last invention" feels like the concern in the 1950s and 1960s that TV (or media in general) was rendering the whole human race brain-damaged. Well, yes and no. Mostly no. It changes the landscape and creates both opportunities and pitfalls. In other words, business as usual.
 
Old 12-23-2014, 05:29 AM
 
28,432 posts, read 11,571,363 times
Reputation: 2070
Not yet, lads, not yet. If the web became alive today I would be surprised but not shocked. But I hope we are somewhat irrelevant to evolution, for many reasons. The code is running and we are just part of it. It is coded that the next life form will form. If it is another 1/2 inch to the outside of the brain, then so be it. That alone would make us real stupid. I see nothing wrong with the next life form treating us like the smart monkeys we are; I just hope they are nice about it. Also, most extinction events left something for the next forms - things like O2, niches, and oil. It is cool to think what we will leave after this event, and its influence. Concentrated resources and information give the next form a nice head start.
 
Old 12-23-2014, 05:34 AM
 
28,432 posts, read 11,571,363 times
Reputation: 2070
Quote:
Originally Posted by Gaylenwoof View Post
Jeremy Howard: The wonderful and terrifying implications of computers that can learn | Talk Video | TED.com

This TED talk is not about the nature of consciousness, but the speaker makes the point that machines are already surpassing, or soon will surpass, human capabilities in areas that have traditionally been the stronghold of humans, i.e., reading, listening, visual comprehension, speaking, and writing. I suspect that sentient conscious awareness (what I've been calling subjective/qualitative experience) will not just automatically emerge from these machine abilities - but since we don't have anything like a good theory of consciousness, I am really just guessing. I could be totally wrong about the requirements for sentience. And if I am wrong, then it is not implausible to think that machine sentience could emerge very soon - maybe even within 5 years (if you watch the video, pay special attention to the final 3 minutes of the talk).

LMAO, I said this like three times already, Gray. If the inputs of machines are better right now (eyes, ears, chemical detection), why wouldn't/couldn't they be more conscious than us when they "awaken"?
© 2005-2024, Advameg, Inc.