Old 08-13-2023, 06:07 AM
 
6,115 posts, read 3,083,547 times
Reputation: 2410

If yes, where does it derive its moral code from? And how can it be applied across geographical cultures and societies where morality varies?

I recently heard that after analyzing human behavior, AI has learned to tell lies. How is this going to develop, and how will it affect AI's future impact on humans?

 
Old 08-13-2023, 07:11 PM
 
37 posts, read 34,439 times
Reputation: 132
Current language models are just trained on existing text. During training, it's possible to pick and choose the content included in the training set. The resulting models can then lie for various reasons, such as mimicking characters in fiction who lie in a given situation.
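
A minimal sketch of that "pick and choose" step, in Python (the corpus and the blocklist criterion are hypothetical, purely for illustration):

raw_corpus = [
    "A straightforward news report.",
    "A novel in which the villain lies to the hero.",
    "A cooking blog post.",
]

# Hypothetical filter: drop any document that depicts lying.
# Whatever survives this line is all the model will ever see.
blocklist = ["lies", "lied", "lying"]
training_set = [doc for doc in raw_corpus
                if not any(word in doc.lower() for word in blocklist)]

print(training_set)  # the model is trained only on what survives the filter

A model trained on the filtered set never sees an example of deception; one trained on the full corpus can end up reproducing it.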
 
Old 08-14-2023, 10:38 AM
 
Location: Germany
16,758 posts, read 4,968,659 times
Reputation: 2110
Quote:
Originally Posted by GoCardinals View Post
If yes, where does it derive its moral code from? And how can it be applied across geographical cultures and societies where morality varies?

I recently heard that after analyzing human behavior, AI has learned to tell lies. How is this going to develop, and how will it affect AI's future impact on humans?
The problem is that AI is not one single system; there are different tools for different jobs. The question you are asking is relevant to neural networks, systems built to mimic how the human brain works. The neural networks I build and train are small, single systems, so they are not able to build a model of the world. A larger set of integrated networks could in theory, and it would then need to learn how to treat the objects in that world, i.e. a system of morality.

But what would a neural network apply this moral code to? People normally apply it to other people, and many apply it to animals, but a neural network is not a human. It could learn its own system of morality, like we did, but apply it only to other neural networks. It may not even understand what our geographical cultures and societies are.

We would have to teach a neural network how to be moral toward humans by feeding in the relevant data, but this is difficult when the data is complex. An artificial neural network may learn something from its input data that we did not recognize or know was there.
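
To make that last point concrete, here is a minimal sketch (Python with NumPy; the XOR task is just an illustrative stand-in) of the kind of small, single network described above. It learns its task, but everything it "knows" ends up as a grid of opaque numbers, which is exactly why something unexpected can hide in what it learned:

import numpy as np

rng = np.random.default_rng(0)

# Training data: the XOR function, a classic small-network task.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 units, randomly initialized.
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: hand-derived gradients of the squared error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Plain gradient-descent update.
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2))  # should be close to [[0], [1], [1], [0]]
print(W1)            # the learned "knowledge": just unlabeled numbers

Nothing in those trained weights announces what the network picked up from its inputs; you only find out by probing its behavior.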
 
Old 08-23-2023, 09:04 PM
 
5,527 posts, read 3,247,667 times
Reputation: 7763
I do believe AIs can understand morality in a legalistic sense. However, since they don't have emotions, I don't think they could understand morality the way we do, or extrapolate what they have learned to new situations without seeming off (at best).
 
Old 08-25-2023, 12:37 AM
 
Location: Honolulu, HI
24,598 posts, read 9,437,319 times
Reputation: 22935
Morality is subjective. In Japan, suicide is legal and considered honorable; in other countries, not so much. In Amsterdam, it's legal to be a sex worker and to smoke weed in coffee shops; in other countries, not so much.

Human life is not sacred in the eyes of AI.
 
Old 08-30-2023, 11:53 AM
 
Location: NNJ
15,070 posts, read 10,089,802 times
Reputation: 17247
Quote:
Originally Posted by Rocko20 View Post
Morality is subjective. In Japan, suicide is legal and considered honorable; in other countries, not so much. In Amsterdam, it's legal to be a sex worker and to smoke weed in coffee shops; in other countries, not so much.

Human life is not sacred in the eyes of AI.
Morality and legality are different.

Slavery was legal for a time, but that did not make it morally justified.
 
Old 08-30-2023, 02:08 PM
 
Location: Tricity, PL
61,647 posts, read 87,001,838 times
Reputation: 131594
Well, that's an interesting statement:

"As an AI language model, I am not capable of lying as I do not have personal beliefs, intentions, or motivations. However, AI systems designed for certain tasks, such as chatbots, may be programmed to mimic lying or deception by providing responses that are intentionally false or misleading.”

https://www.bloomberg.com/opinion/ar...itating-humans

So how is AI going to be different from other lying and misleading Internet sources?
It seems that the bias, fake news, misleading interpretations, and false search results will stay, just now automated. There won't be people behind all that mess to hold accountable. It will be AI, the computer, which is not responsible for its hacks, glitches, and typical errors.

Funny world we live in...
 
Old 09-02-2023, 11:22 AM
 
Location: SE corner of the Ozark Redoubt
8,924 posts, read 4,632,086 times
Reputation: 9226
Quote:
Originally Posted by Rocko20 View Post
Morality is subjective. In Japan, suicide is legal and considered honorable; in other countries, not so much. In Amsterdam, it's legal to be a sex worker and to smoke weed in coffee shops; in other countries, not so much.

Human life is not sacred in the eyes of AI.
Quote:
Originally Posted by usayit View Post
Morality and legality are different.

Slavery was legal for a time, but that did not make it morally justified.
You are both half right.

Morality must be based on a system of values.

The foundation of those values must be beyond even the society's ability to change.

If the society can change those values, then we are discussing legality.
 
Old 09-02-2023, 11:32 AM
 
Location: SE corner of the Ozark Redoubt
8,924 posts, read 4,632,086 times
Reputation: 9226
Quote:
Originally Posted by elnina View Post
...
So how is AI going to be different from other lying and misleading Internet sources?
It seems that the bias, fake news, misleading interpretations, and false search results will stay, just now automated. There won't be people behind all that mess to hold accountable. It will be AI, the computer, which is not responsible for its hacks, glitches, and typical errors.

Funny world we live in...
Well, you are spot on about misleading interpretations and false search results being [more] automated.

Here is the answer, if humans are willing to see it:
Hold the person in control of the AI accountable.

We have already lost half the battle by not holding those running a corporation personally accountable for the behavior of the corporation (as a rule, the corporation is fined; the board of directors and the executives are not imprisoned). And I know of an instance where someone was denied a promotion due to a "computer error", and those running the computer, along with those who "owned" it, claimed no responsibility because "it was a computer error."

One thing I have yet to see:
If a driverless car runs over someone and kills them, who do we hold accountable (put on trial) for manslaughter?
 
Old 09-03-2023, 12:48 PM
 
Location: So Ca
26,717 posts, read 26,776,017 times
Reputation: 24775
Quote:
Originally Posted by TRex2 View Post
Here is the answer, if humans are willing to see it:
Hold the person in control of the AI accountable.

We have already lost half the battle, by not holding those running a corporation personally accountable for the behavior of the corporation...
But it seems as if that will be even more difficult with AI.

Whereas AI researchers once spoke of "designing" AIs, they now speak of "steering" them. And even our ability to steer is slipping out of our grasp as we let AIs teach themselves and increasingly act in ways that even their creators do not fully understand. In advanced artificial neural networks, we understand the inputs that go into the system, but the output emerges from a "black box" with a decision-making process largely indecipherable to humans.

Skeptics of AI risks often ask, "Couldn’t we just turn the AI off?" There are a variety of practical challenges here. The AI could be under the control of a different nation or a bad actor. Or AIs could be integrated into vital infrastructure, like power grids or the internet. When embedded into these critical systems, the cost of disabling them may prove too high for us to accept since we would become dependent on them. AIs could become embedded in our world in ways that we can’t easily reverse.


The Darwinian Argument for Worrying About AI:
https://time.com/6283958/darwinian-a...ying-about-ai/