March 03, 2004
Transhumanism.
Vernor Vinge on the Singularity. A rather old article, but a good one. Wrap your frontal lobes around this one, fellow domesticated primates.
"Within [twenty] years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." Good!
-
I've Furled this, and plan to read it tomorrow... (looks fascinating)
-
"Within [twenty] years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended." They have said that for twenty years already. I still believe the planet will end up covered by a grey goo.
-
Meh. We'd have to be able to design machines that could replicate that way. Right now, we aren't even close. Also, t'wouldn't be that hard to rig up a fuse on the things: if generation > x, then stop, or the like.
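For illustration, a minimal sketch of that kind of generation-counter "fuse" in Python. The `Replicator` class and the limit of 10 are hypothetical, just to show the idea: each copy inherits an incremented counter, and no copy is made past generation x.

```python
MAX_GENERATION = 10  # hypothetical hard limit: the 'x' in "if generation x, then stop"

class Replicator:
    """Toy self-replicator carrying a built-in generation fuse."""

    def __init__(self, generation=0):
        self.generation = generation

    def replicate(self):
        # The fuse: past the limit, refuse to produce another copy.
        if self.generation >= MAX_GENERATION:
            return None
        return Replicator(self.generation + 1)

# A chain of copies dies out after MAX_GENERATION successful copies:
r = Replicator()
attempts = 0
while r is not None:
    r = r.replicate()
    attempts += 1
print(attempts)  # 11 attempts: 10 succeed, the 11th blows the fuse
```

The obvious objection (raised further down the thread) is that anything capable of copying itself might also be capable of copying itself without the fuse, which is why the design-constraint argument below matters more than the fuse itself.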
-
Magnets will save us from the killer robots.
-
I don't know, but it seems to me that creating an intelligent machine is quite far off in the future, or maybe unattainable. Not because we don't have the capacity, but because we don't understand what intelligence is all about. Most people assume that creating an intelligent machine implies creating something that thinks as a human does, or better. That is, that it will reach the same logical conclusions as us and judge reality by the same standards as us, only better. The truth is that most of our thinking and reasoning depends to a great extent on our biological origins and shortcomings. We usually interpret and handle reality in a way that goes hand in hand with our evolved biological nature.

That wouldn't be the case for machines. Their thinking will depend on how we build them and on their purpose. If we build them to reach maximum efficiency, then the grey goo scenario is more probable than any kind of utopia. If we design them to protect humanity, they could reach the logical conclusion that it's better we don't exist at all. But the most probable scenario is that machines won't have any kind of free will and they'll always be our loyal slaves, since it's too risky to give them self-preservation and reproductive instincts. In that last scenario, the singularity will never be reached unless we decide to merge ourselves with them and distill our human nature little by little, which will probably take a heck of a long time.
-
I agree, Zemat. I'll write a longer reply when I've got some more time, probably. Bet you all can't wait.
-
Ok, these articles are old, but you chimps do not appear to be accounting for the latest advances, nor using your own neural networks to their optimum functionality. ;) You don't need to understand intelligence to create an intelligent entity. You merely need to supply the base structure (using whatever material & approach is viable at the time), supply stimulus thru' sequenced environmental variables, feed the thing with nutrients along with stress/counter-stress input, and *watch the thing achieve intelligence (read: self-awareness) on its own*. Essentially this is evolution in microcosm: our basic problem is to accelerate (and monitor) it. Given the technological ability to create a synthetic neural net, it would be not much different from training a pet or, on a more sophisticated level, teaching a human child.

We know how neural networks form. We understand their basic biochemical composition and behaviour. Sure, we cannot replicate a sophisticated thinking organism's brain structure in all its complexity synthetically at this time, nor can we maintain sustainable, living neural tissue in an artificial environment quite yet, either. But these are mere technological obstacles, not inviolable cosmic laws. A mere decade ago, the cochlear implant was the stuff of science fiction. Right now, artificial 'brain circuitry' is not just on the drawing board, it is undergoing basic testing on quite sophisticated mammals, such as the rat (poor wee bastards).

Example of failed linear projection, the 1st: the concept of Nanites becoming a planet-suffocating grey/green goo is not tenable. Nanites, by their very nature, have to be self-sustaining and interface with their environment independently. To postulate these synthetic organisms reproducing without restraint is linear-monkey-think, and basically juju; superstition. The Nanites' fundamental imperative has to be to maintain their own survival; it is their central attribute.
Uncontrolled replication would pretty much put an end to that long-term survival for the entire 'species', since they would soon overwhelm their environment, eclipse their base resources, and ultimately choke themselves. Such a basic flaw would be ironed out in the first stages of primitive lab prototypes, or the design would fail in the petri dish. This is the same control mechanism at work in 'naturally evolved' organisms, and there's no reason to abandon the restrictions at work in the organic archetype based on some vague, undefined criteria of monkey fear-magic. In fact, a successfully designed self-replicating Nanite would probably achieve this without some kind of crude '3-laws' impediment imposed by its designers.
-
If we're talking about a technological level where truly autonomous, self-replicating, molecular-sized 'machines' can be created by our civilization, you are talking about a very high degree of sophistication indeed. This is not going to be a 'mad professor' hit & miss affair. These obvious problems will have been dealt with long before the Nanite can actually be created. Any of these behavioural damping mechanisms should basically be an analog of their most successful 'natural' archetypes, and be constrained by the same boundaries. Consider: a virus genotype will quickly become extinct if it actually destroys its own host (ok, let's leave virulent strains like Ebola out of this equation for now, 'cos they are the exotic exception to the rule), and most viruses (& bacteria, etc.) have evolved within this constraint, all uncounted gazillions of them, since the dawn of life on Earth. These postulated Nanites, like everything else, are constrained by their resources, their self-metaprogrammed survival imprints, & by morphology. The 'grey goo scenario' is a Frankenstein's Monster paranoiac fear of linear-think monkeymen; it is not based on probability consistent with the overall technological scenario, as far as I can see.

Concerning the original point, consider: slug & bivalve brains are currently being fused with simple silicon chips. Other researchers are experimenting with replacement of damaged/malfunctioning hippocampus tissue with artificial processors. This research is at the stage of implanting these chips into rat brains, but is limited by the electrical basis of the chips' function. A significant technological advance will be needed to take this the necessary step further. This *will* occur. It is a statistical certainty based on our exponentially avalanching advances over the last century alone.

Now where are the boobtits? I need a beer and a big, big bucketbong.
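The "overwhelm their environment, eclipse their base resources, choke themselves" failure mode is easy to see in a toy simulation (a sketch with made-up numbers: greedy replicators each eating one unit per step from a finite, slowly regenerating resource pool):

```python
def simulate(steps=50, resource=1000, regen=5):
    """Greedy replicators: every fed individual copies itself each step."""
    pop = 10
    history = []
    for _ in range(steps):
        fed = min(pop, resource)   # each individual needs 1 resource unit to survive
        resource = resource - fed + regen
        pop = fed * 2              # survivors each replicate once; the rest starve
        history.append(pop)
    return history

history = simulate()
print(max(history), history[-1])  # boom, overshoot, then crash to a tiny remnant
```

With these numbers the population doubles until it burns through the pool, then collapses to whatever the trickle of regeneration can feed, which is the thread's point: unrestrained replication is self-limiting long before it's planet-eating.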
-
The 'grey goo scenario' is a Frankenstein's Monster paranoiac fear of linear-think monkeymen... Who are you calling a linear-think monkeyman, huh? My own comment is crude, which reminds me I should not make comments under time constraints.

The "grey goo scenario", as I imagine it, isn't based on uncontrolled replication of badly engineered nanites, but on a controlled one, run by a deranged AI bent on achieving maximum efficiency in everything. Think about it: there's not much difference between fire and life. Both are, in the end, combustion processes. But natural selection on a physical scale prefers life over fire because it is more sustainable, due to its complexity and malleability. Life can extract energy more efficiently and in greater quantities with less energy input from external sources, which in turn gives life the capacity to tap more energy sources than fire. Life has evolved, then, in a way that prevents more energy-wasting processes from taking place. Once we create an AI which corrects and expands itself, it will recognize all life on Earth as energy-wasting processes which consume too many resources. Then it will proceed to wipe life out and replace it with far more efficient nanites. After that, it will try to expand itself to other planets to continue the controlled combustion of every resource in the galaxy. And I'm not fearmongering, it's just a hypothetical scenario based on a drunken thought.
-
On the artificial intelligence issue: I still think that we're far from achieving it as we currently understand it. From what I have learned about neural networks, without proper environments and correctly implemented evolutionary constraints they won't achieve anything near animal consciousness. Given enough "brain power" we can make them seem familiar to us. Yet without giving them the gift of selfishness (in a Dawkinsian sense) they will never become anything more than tools for us.
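As a toy illustration of "proper environments and evolutionary constraints" (a sketch with invented numbers, nowhere near consciousness): a tiny evolutionary loop in which the environment is a fitness test and selection is the constraint, evolving one artificial neuron's weights until it computes a simple OR function.

```python
import random

random.seed(0)

# The 'environment': a task the neuron is scored against (here, logical OR).
CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def act(weights, inputs):
    # A single threshold neuron: bias plus weighted inputs.
    s = weights[0] + sum(w * x for w, x in zip(weights[1:], inputs))
    return 1 if s > 0 else 0

def fitness(weights):
    # The 'evolutionary constraint': how many cases it gets right.
    return sum(act(weights, inp) == out for inp, out in CASES)

# Random initial population of weight vectors [bias, w1, w2].
pop = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(20)]
for _ in range(100):
    pop.sort(key=fitness, reverse=True)
    elites = pop[:5]                      # selection
    mutants = [[w + random.gauss(0, 0.2) for w in random.choice(elites)]
               for _ in range(15)]        # mutation
    pop = elites + mutants                # elitism keeps the best found so far

best = max(pop, key=fitness)
print(fitness(best))
```

Without the fitness test (the environment) the weights just drift randomly; with it, the population converges on the task, which is the commenter's point in miniature: the constraints do the shaping, not the raw "brain power".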
-
"In a Dawkinsian sense"... Er, scratch that...
-
I wasn't, uh, directing my comment at you, particularly, Zemat, and did not mean to imply you are a scaremonger or anything. Sorry about that, old chap. More at the, uh, general idea monkeys that surround us, and bind us. Luminous beings are we, not this crude monkey chatter. You must feel the Bananas around you; between you, me, that tree, some bird over there, my butt (parp) oof Please remember that I am most always either drunk or very, very stoned. I never mean to offend, even though my pants suggest otherwise.
-
I wasn't offended, just playing along. I usually write with a heavy dose of caffeine in my system, so it's more probable that I'm the one offending monkeys at random.
-
my name says it all... i'll always believe in the power of life. in medical jargon, dx = diagnosis, hence i am diagnosed as a lifer... sorta a pro-lifer with a different paradigm... i just believe in the power of life as a personal miracle that we all have and must cultivate... *she says, going off on a tangent.*
-
I just found another link on mind-computer interface: Transforming Thoughts Into Deeds from Wired News. Bit old, but.
-
Will Super Smart Artificial Intelligences Keep Humans Around As Pets?
-
Damn fine read, h-dogg!