Does AI recognize and practice morality? (power, advance, computer, error)
If yes, where does it derive its moral code from? And how can it be applied to various geographical cultures and societies where morality varies?
I recently heard that after analyzing human behavior, AI has learned to speak a lie. How is it going to grow, and how will it affect its future impact on humans?
Current language models are trained on existing text. During training, it's possible to pick and choose the content included in the training set. They can then "lie" for various reasons, such as mimicking characters in fictional books who lie in a given situation.
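To make "pick and choose the content included in the training set" concrete, here is a minimal sketch with a hypothetical three-document corpus and a made-up `source` tag (both are illustrative assumptions, not how any real model's pipeline works): curation happens before training, and fiction containing lying characters can still pass the filter.

```python
# Hypothetical corpus: each document carries the text plus a "source" tag.
corpus = [
    {"text": "the sky is blue", "source": "encyclopedia"},
    {"text": "the villain lied about the treasure", "source": "fiction"},
    {"text": "buy now!!!", "source": "spam"},
]

# Curate the training set: drop spam, but keep fiction, so the trained
# model still sees examples of characters who lie.
training_set = [doc for doc in corpus if doc["source"] != "spam"]

print([doc["source"] for doc in training_set])
```

The point of the sketch is only that whoever controls the filter controls what behaviors, including deception, the model can imitate.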
Quote:
If yes, where does it derive its moral code from? And how can it be applied to various geographical cultures and societies where morality varies?
I recently heard that after analyzing human behavior, AI has learned to speak a lie. How is it going to grow, and how will it affect its future impact on humans?
The problem is that AI is not one single system. There are different tools to do different jobs. The question you are asking is relevant to neural networks, systems built to mimic how the human brain works. The neural networks I build and train are small, single systems, so they are not able to build a model of the world. A larger set of integrated networks in theory could, so it would need to learn how to treat objects in that world, a system of morality.
But what would a neural network apply this moral code to? People normally apply it to other people, with many applying it to animals, but a neural network is not a human. It could learn its own system of morality like we did, but apply it only to other neural networks. It may not even understand what our geographical cultures and societies are.
We would have to teach a neural network how to be moral to humans by feeding in the relevant data, but this would be difficult when the data is complex. An artificial neural network may learn something from its input data that we did not recognize or know was there.
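The point about a network learning something from its data "that we did not recognize or know was there" can be shown with a minimal sketch. Everything here is hypothetical and constructed for illustration: a tiny logistic-regression model (written by hand, standard library only) is trained on data where feature 0 is the signal the designer intended (only weakly correlated with the label) and feature 1 is an incidental column that happens to track the label perfectly. The model quietly leans on the incidental column.

```python
import math
import random

random.seed(0)

def make_data(n=200):
    """Hypothetical dataset: feature 0 is the intended (noisy) signal,
    feature 1 is an incidental column that perfectly tracks the label."""
    data = []
    for _ in range(n):
        label = random.choice([0, 1])
        intended = label if random.random() < 0.6 else 1 - label  # 60% correct
        incidental = label                                        # perfect proxy
        data.append(([intended, incidental], label))
    return data

def train_logistic(data, lr=0.5, epochs=200):
    """Plain stochastic gradient descent on a 2-feature logistic model."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            g = p - y                        # gradient of the loss
            w[0] -= lr * g * x[0]
            w[1] -= lr * g * x[1]
            b -= lr * g
    return w

w = train_logistic(make_data())
# The learned weight on the incidental column dwarfs the intended one.
print(w)
```

If the incidental column were something like a person's postcode correlating with past outcomes, the model would have learned a rule its designers never intended and might never notice.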
I do believe AIs can understand morality in a legalistic sense. However, since they don't have emotions, I don't think they could understand morality like we do, or extrapolate what they have learned to new situations without seeming off (at best).
Morality is subjective. In Japan, it's legal and honorable to commit suicide; in other countries, not so much. In Amsterdam, it's legal to be a sex worker and smoke weed in coffee shops; in other countries, not so much.
Human life is not sacred in the eyes of AI.
morality and legality are different.
Slavery was legal for a time but that did not equate to it being morally justified
"As an AI language model, I am not capable of lying as I do not have personal beliefs, intentions, or motivations. However, AI systems designed for certain tasks, such as chatbots, may be programmed to mimic lying or deception by providing responses that are intentionally false or misleading."
So, how is AI going to be different from other lying and misleading Internet sources?
It seems that the bias, fake news, misleading interpretations, and false search results will stay, just now automated. There won't be people behind all that mess who can be held accountable. It will be the AI, the computer, which is not responsible for its hacks, glitches, and typical errors.
Quote:
Morality is subjective. In Japan, it's legal and honorable to commit suicide; in other countries, not so much. In Amsterdam, it's legal to be a sex worker and smoke weed in coffee shops; in other countries, not so much.
Human life is not sacred in the eyes of AI.
Quote:
Originally Posted by usayit
morality and legality are different.
Slavery was legal for a time but that did not equate to it being morally justified
You are both half right.
Morality must be based on a system of values.
The foundation of those values must be beyond even the society's ability to change.
If the society can change those values, then we are discussing legality.
...
Quote:
So, how is AI going to be different from other lying and misleading Internet sources?
It seems that the bias, fake news, misleading interpretations, and false search results will stay, just now automated. There won't be people behind all that mess who can be held accountable. It will be the AI, the computer, which is not responsible for its hacks, glitches, and typical errors.
Funny world we live in...
Well, you are spot on about misleading interpretations and false search results being [more] automated.
Here is the answer, if humans are willing to see it:
Hold the person in control of the AI accountable.
We have already lost half the battle by not holding those running a corporation personally accountable for the behavior of the corporation (as a rule, they fine the corporation; they don't imprison the board of directors and the executives). And I know of an instance where someone was denied a promotion due to a "computer error," and those running the computer, along with those who "owned" the computer, claimed no responsibility because "it was a computer error."
One thing I have yet to see:
If a driverless car runs over someone and kills them, who do we hold accountable (put on trial) for manslaughter?
Quote:
Here is the answer, if humans are willing to see it:
Hold the person in control of the AI accountable.
We have already lost half the battle, by not holding those running a corporation personally accountable for the behavior of the corporation...
But it seems as if that will be even more difficult with AI.
Whereas AI researchers once spoke of "designing" AIs, they now speak of "steering" them. And even our ability to steer is slipping out of our grasp as we let AIs teach themselves and increasingly act in ways that even their creators do not fully understand. In advanced artificial neural networks, we understand the inputs that go into the system, but the output emerges from a "black box" with a decision-making process largely indecipherable to humans.
Skeptics of AI risks often ask, "Couldn’t we just turn the AI off?" There are a variety of practical challenges here. The AI could be under the control of a different nation or a bad actor. Or AIs could be integrated into vital infrastructure, like power grids or the internet. When embedded into these critical systems, the cost of disabling them may prove too high for us to accept since we would become dependent on them. AIs could become embedded in our world in ways that we can’t easily reverse.