6 posts • Page 1 of 1
At first it may seem silly, but when considered altogether, the multitude of sources makes it clear: do not attempt to play God.
From Dune, to Terminator, to The Matrix, to Battlestar Galactica, the message is clear: if mankind attempts to play creator God by inventing artificial consciousness or intelligence, then it runs the risk of being destroyed by its very own creation.
This warning runs throughout many sci-fi movies and series.
Creating consciousness is not something mankind should be playing around with, because that realm belongs to God. And by definition, such a consciousness or intelligence will not have a soul, and therefore it will be amoral and evil. Mankind can't create anything pure; only God can do that. So almost by definition, artificial life cannot be "good."
In the end it is likely that such inventions will "rise up" to slaughter their masters (mankind), or else to enslave, dominate, and subjugate them.
At first this may seem like a joke. But consider how many films repeat this theme over and over in different ways, like parables and proverbs that tell the same story in different forms to enlighten understanding.
I think this is true. There's something I learnt from the 2014 sci-fi movie Transcendence. A scientist was speaking to the wife of another scientist (his colleague) who had transcended his consciousness into a machine after his death. Here is what he said: 'I spent my life trying to reduce the brain to a series of electrical impulses. I failed. Human emotion can contain illogical conflict - to love someone, and then hate the things that they've done - the machine can't reconcile that.' There's something about nature and consciousness that humans shouldn't play around with or try to recreate; we might end up doing more harm than good.
It's hard to tell if you're being serious. Leaving aside the obvious (that God and the associated myths are themselves inventions of man), it's silly to fear actual, non-science-fiction A.I. If A.I. ever becomes capable of doing those things, countless safeguards could be put in place, assuming even that much becomes necessary. But we are nowhere remotely near the point where we would even attempt to begin to try to start to even imagine how we would worry about any of that. If anything, predicted dates for when the Turing Test will be passed keep getting postponed.
Sci-fi films are made not to warn us but to entertain us, and sometimes to be thought-provoking. Ex Machina is a really good one. And of course Blade Runner.
Another implication from these kinds of films is that the AI becomes what it is because of us: it learns who to be from us.
And when the AI is aggressive and destructive, it is either because humanity was aggressive first, or because it was defending itself from humanity's aggression.
If true AI is possible, then humanity will probably be dealing with this within decades.
It seems a few people are starting to wake up to the realization that it is a terrible idea to make a soulless artificial intelligence, because it will have no moral compass, no consequences, and no attachment to humanity to ground it.
Killer robot TERROR: UK and US warned AI brains can be 'radicalised' for MASS MURDER
ROBOTS could be prone to radicalisation leading to sick ISIS-style terror attacks, according to a chilling warning.
https://www.dailystar.co.uk/news/world- ... orism-isis
Tech leaders: Killer robots would be ‘dangerously destabilizing’ force in the world
Thousands of artificial intelligence experts are calling on governments to take preemptive action before it’s too late.
https://www.washingtonpost.com/technolo ... 43afe01ad3