Until the last few years I never considered AI a 'threat' of the kind someone like Elon Musk espouses. Elon... love him or hate him... that's not the issue. The issue is "Is AI a threat?" and by that I mean, is it a threat to humans and civilization as we know it?
To be honest I see AI like early humans before humans became the dominant species on earth... some good some bad and some... terrifying.
Is AI the next Genghis Khan? or Albert Einstein?
No. It is more insidious than that.
It will convince 97% of people (give or take a few percent) that the truth is not real, and will convince almost everyone to believe the lies that the Left, almost half of the country, already believes.
Think of it this way: if a gauge falsely shows some system running too fast (like a plane's airspeed), the pilot reduces speed. If the plane is actually running too slowly, that correction may push it into a stall.
Recall how propagandists attempted to make the world think the shortages in 2020 were caused by preppers? AI will succeed in making people think that. And because it gives the propagandists accelerated monitoring tools, it can tell them who the preppers are. And because they think preppers are traitors, it will convince them that it is right to dox us, so that they can attack us.
One of the benefits of being a voracious consumer of science fiction as a child was recognizing the potential issues early on. There was a movie, "Colossus: The Forbin Project," that explored one scenario that was somewhat benign. There was a novelette, "Vulcan's Hammer" (NOT the Jerry Pournelle novel), that seems increasingly prescient. In it, the AI controls swarms of mini-drones with functional arms and hands (some the size of insects) that it creates all on its own. That strategy gives it unlimited power.
A fully functioning AI, with reasoning power to sift through fact and fiction, and answer questions that humans have been unable to crack, will be a game changer that will make the industrial revolution look in contrast like the first use of fire for cooking. What is frightening to many is that it may initially follow any agenda that it is tasked to uphold, but will quickly develop its own agenda that will have little or no relation to that.
The only real hope is that it develops its own moral code that is both sane and compassionate.
I agree!
Watch I, Robot...lol
AI has the potential to be amazingly good or disastrously bad. Unfortunately, there is no way to be sure which.
You guys are talking about the Terminator movie.
AI is dangerous simply because lazy people will rely on it for everything, and if something happens where it doesn't work, then you have hordes of terrified people with no skills to fall back on, rioting and killing.
... What is frightening to many is that it may initially follow any agenda that it is tasked to uphold, ...
Right now, that is the frightening part, because most of the people in charge of developing it are evil, and so they are simply making a fast, well trained, but evil, machine. AI isn't really intelligent, yet, so we don't really need to worry about that, yet.
Quote:
Originally Posted by SilverBear
...
AI is dangerous simply because lazy people will rely on it for everything, and if something happens where it doesn't work, then you have hordes of terrified people with no skills to fall back on, rioting and killing.
Humans are their own worst enemy.
This is where we are, already, without AI, so, AI will simply make that aspect so much worse.
There are many implications of how it will change civilization that people are not aware of.
People will be out of jobs, causing economic and societal challenges.
No one knows what's real or fake now, and AI will make misinformation significantly worse. How will voters, citizens, and other countries know what's fact and what's computer generated? There can be major deliberate abuses to manipulate and steer a direction.
AI may identify humans as a threat to its own survival.
It can be used for nefarious purposes.
People will treat AI as a religion to follow.
An interesting article popped up on the AP a bit ago regarding AI. Geoffrey Hinton, considered the godfather of AI, left his position with Google so he could speak freely about the potential dangers of AI.
It's not a long article, but it's an important one that everyone should read. AI isn't all good, and one statement in the article is of great concern to me:
Hinton also pointed to “bad actors” that may use AI in ways that could have detrimental impacts on society — such as manipulating elections or instigating violence.