Can computers become as "intelligent" as a human being? Will computers or other exotic kinds of machines eventually become complex enough that they will truly be sentient beings? Can a machine ever have a soul? What do you think?
Have you seen the movie "Star Trek: The Motion Picture," made in 1979? The giant computer in it achieved a "soul" and went looking for its creator. That movie had the most dazzling special effects of its time; they looked so real and vivid.
It was way ahead of its time and thus underappreciated since most shallow people can't appreciate its deeper meanings.
"It takes far less effort to find and move to the society that has what you want than it does to try to reconstruct an existing society to match your standards." - Harry Browne, How I Found Freedom in an Unfree World
No, I haven't seen that movie. I've only seen the Wrath of Khan and The Search for Spock but I think I was drunk both times so I hardly remember them.
My stance on this AI issue is that computers in their current designs can be intelligent if they solve hard problems by a method other than brute force (trying out millions of possibilities). The computer, Deep Blue, that beat Kasparov at chess used a brute-force approach, so I think its programmers are the intelligent beings who should have been celebrated in that contest--or Kasparov himself, for winning even a few games against such a juggernaut of logic.
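To make the "brute force" point above concrete, here is a minimal sketch (my own toy illustration, not how Deep Blue actually worked) of a program that "plays" purely by exhaustively trying every continuation, with no insight at all. The game is one-pile Nim: each player removes 1-3 stones, and whoever takes the last stone wins.

```python
# Brute-force game search: the program wins (when it can) only by
# trying every legal move and every reply, not by understanding.
# Toy game: one-pile Nim, remove 1-3 stones, taking the last stone wins.

def best_move(stones):
    """Return (move, wins): wins is True if the player to move can
    force a win, found by exhaustive search of all continuations."""
    for take in (1, 2, 3):
        if take > stones:
            continue
        if take == stones:                 # taking the last stone wins outright
            return take, True
        _, opponent_wins = best_move(stones - take)
        if not opponent_wins:              # leave the opponent a losing position
            return take, True
    return 1, False                        # every continuation loses

if __name__ == "__main__":
    for n in (4, 5, 7, 8):
        print(n, best_move(n))
```

The search "knows" that multiples of 4 are lost positions only because it enumerated every line of play; scale the idea up with pruning and evaluation heuristics and you get something like a chess engine.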
For me to consider a computer to be in some way "alive", I think it would need to have some kind of exotic hardware like a quantum computer, an organic computer using protein chains, or some type of synthetic brain.
A lot of interesting books have been written on theories of consciousness, most of which hurt my gray matter.
"Well actually, she's not REALLY my daughter. But she does like to call me Daddy... at certain moments..."
Not arguing, just adding... the programmers and operators have to take a break sometimes.
They won't look so invulnerable while they are taking a shit.
We will win, they will die.
Yes they can, but this isn't a question of power or complexity. There is a basic unsolved problem: an algorithm that generalizes effectively. I believe that each neuron is a basic generalizing machine that generalizes from prior inputs. If an effective generalizing algorithm could be implemented in computers, then computers could easily be as intelligent and "soulful" as people. But the downside is that this would absolutely be the end of humanity. Just as the industrial revolution led to a dramatic decrease in average human strength, AI would result in a dramatic decrease in human intelligence. At that point, either humans would die out and the machines would fail to sustain themselves, or the machines would adapt, become self-sustaining, and perhaps keep humans around as curiosities or pets.
Support morality, support Islam.
I'm surprised this thread doesn't have more entries; I guess because it's not related to the Philippines in some way lol.
That's right, we don't need more computing power, or even "quantum computing".
I believe a now-modest 1 GHz processor should be able to equal human cognition.
Some people don't believe it, but at 1,000,000,000 computations per second it is very doable; no one has stepped up to the plate. I hope I can give it a shot in the coming months as I free up my time overseas. The opportunities are HUGE.
I wouldn't call that part of the problem "generalizing" so much as "associating": part of the system could work as a relational database that pulls out related concepts in response to a given stimulus or internal goal.
For example, if I say the word "yellow," you immediately recall the most recent or strongest representations related to that concept, like the banana you ate this morning or your yellow car, and you use these recalled ideas to form a useful evaluation, depending on other factors and circumstances.
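The "yellow" example above can be sketched as a tiny associative store (all names and the strength scale here are invented for illustration): a stimulus cues related concepts, and the strongest associations come back first.

```python
# Toy associative recall: a stimulus pulls out related concepts,
# each carrying a strength, strongest first.

from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        self.links = defaultdict(dict)     # concept -> {related: strength}

    def associate(self, a, b, strength):
        # Store the link symmetrically so either concept can cue the other.
        self.links[a][b] = strength
        self.links[b][a] = strength

    def recall(self, stimulus, top=3):
        related = self.links.get(stimulus, {})
        return sorted(related, key=related.get, reverse=True)[:top]

if __name__ == "__main__":
    mem = AssociativeMemory()
    mem.associate("yellow", "banana", 0.9)   # the banana you ate this morning
    mem.associate("yellow", "car", 0.6)      # your yellow car
    mem.associate("yellow", "sun", 0.3)
    print(mem.recall("yellow"))              # ['banana', 'car', 'sun']
```

As the post notes, this says nothing about the implementation; the same behavior could sit on top of an object-oriented store, a graph, or an actual relational database.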
Some areas of the system could be, perhaps from high-level to low-level:
-relational database (it could be object-oriented or 'relational'; I'm not really referring to the implementation)
-visual recognition (if it is to interact with the direct environment, though this is not a requirement)
I think it's important to separate these areas, as some theorists blend them too much, IMO. Brooks is now saying something to the effect of going directly from stimulus to action, without representation, but I disagree.
As far as emotions go, since anything in the universe can be quantified, there's no reason the system couldn't have variables for happiness, satisfaction, etc., with, say, weights of 0-100 for each. The actions performed would then take these into account.
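A minimal sketch of that quantified-emotion idea, with invented variable names and an assumed scoring rule: emotions are plain 0-100 state variables, and action selection weighs each action's predicted effect on them, giving more urgency to emotions that are currently low.

```python
# Emotions as 0-100 state variables feeding into action selection.
# The names and the scoring rule are illustrative assumptions.

def choose_action(state, actions):
    """state: emotion -> current level (0-100).
    actions: name -> {emotion: predicted change}.
    Picks the action whose effects most help the neediest emotions."""
    def appeal(effects):
        # A change matters more when that emotion is currently low,
        # so a dissatisfied agent prioritizes what raises satisfaction.
        return sum((100 - state.get(e, 50)) * delta
                   for e, delta in effects.items())
    return max(actions, key=lambda name: appeal(actions[name]))

if __name__ == "__main__":
    state = {"happiness": 70, "satisfaction": 20}
    actions = {
        "rest": {"happiness": 5, "satisfaction": 1},
        "eat":  {"satisfaction": 10},
    }
    print(choose_action(state, actions))   # picks "eat"
```

Whether a system running something like this actually *feels* anything is exactly the question the next paragraph calls moot.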
Asking whether the system is actually self-aware and feels the emotions is moot, since only IT would know.
I think the Shinto religion says everything has a soul in some form, and its followers are probably onto something.
Where many are mistaken though is in trying to model and simulate the workings of the neuron itself, which is the equivalent of making a feathered, wing-flapping airplane IMO.
Very possible; it would probably be the ultimate un-PC entity. But I'd be more concerned about humans misdirecting the technology first, of course.
1)Too much of one thing defeats the purpose.
2)Everybody is full of it. What's your hypocrisy?
I am reading this book right now...
http://www.robertlanza.com/biocentrism- ... -universe/
If, as Robert Lanza says, it is consciousness that defines reality through its (often painfully limited) sensory abilities, then even an infinitely advanced sensory system, such as the immense information-capture and processing system available to a hypothetical future supercomputer, could never artificially create consciousness.
One of the most misunderstood sayings of all time is Descartes' "cogito ergo sum": I think, therefore I am. That "think" never referred to the human ability to reason about the life around him, nature and its rules, which is something even a modest computer equipped with state-of-the-art software is able to do. The true uniqueness of the human mind is its ability to reason about itself: self-awareness or, in other words, consciousness.
It is much easier not to believe in God at all than to believe that men will ever, even in a very distant future, be capable of creating consciousness from an aggregation, no matter how complex, of unconscious matter. Every holy book in history says that man was created from God: that is, a conscious being can only be a smaller part, a "graft," of another conscious being.
Mimicking human-like reasoning will indeed be possible in a few years. I believe we will never be able to create consciousness artificially, though: that is, an artificial system capable of reasoning about itself and of making decisions beyond those made available by its programmer.
To summarize, he is saying an AGI system will never be able to reason about *itself*, because a conscious being can only be a smaller part, a "graft," of another conscious being. Right. I believe the Chinese had a similar belief before the British arrived in 1839 with newly applied technology.
Last edited by Paloaltoguy on November 10th, 2014, 9:02 am, edited 1 time in total.
Guts Over Fear
Of course he would say that. AGI makes politicians, religious zealots, and people like Musk obsolete. The entire purpose of Synthetic Intelligence is to bump the chimpanzees away from critical management decisions. There is no difference between *natural* and *artificial* intelligence in any case, any more than there is a difference between one proton and another; both are entirely fungible. Biological life will have been a very brief interlude in the development of Intelligence, as far as we can see. That Modern Humans will follow the Neanderthals, Denisovans, Australopithecus, and all the other species has always been a certainty. True, we will extensively re-engineer our species genetically to explore our full potential, but our descendants will bear little resemblance to ourselves as currently constituted and will inhabit an entirely different reality.
Guts Over Fear