April 29, 2005
Meet Cyc (pronounced "psych")
the next step toward artificial intelligence...this time with common sense and the ability to watch your webcam to learn about you. (insert inevitable Terminator "Skynet" reference here)
-
I am skeptical. It'll be world-changing if it happens, which is why it's referred to as a "singularity", but I'm not going to hold my breath. Using a knowledge base is an old technique that has been tried without success, and personally I think trying to build such a system in this manner is doomed to failure (though I welcome being proven wrong). I think proper AI (or pseudo-intelligence, or whatever we end up getting) will come from something closer to Fluid Analogies or its ilk. It seems a far better course of research than the "Let's Make a Working Brain" top-down approach.
-
"this time with common sense and the ability to watch your webcam to learn about you." If it really had any common sense, it wouldn't want to watch what I'm doing on my webcam.
-
I am awaiting the Singularity with rabid curiosity. Aside from that... I'm skeptical that a system that uses symbolic logic exclusively can generalize well enough not to be rather easily stumped at some point. I think it's much easier to write more powerful systems using a statistical inference framework. That said, I think the real solution is a combination of the two. In any case, no system can be expected to exhibit "common sense" without an extensive database of rules and relations to base its inferences on. And they need to be general rules, so the system needs rules about how to construct general rules from specific observations (i.e., make inferences) if it's going to actually learn. So the knowledge base can hardly be gotten rid of entirely. Disclaimer: I'm studying neural nets and statistical inference models of learning and natural language processing; my professor very much prefers statistics to symbolic logic, so we haven't spent a lot of time learning about the alternatives. I'm sure I'm biased. =)
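To make that concrete, here's a minimal sketch in Python of what a symbolic rule base combined with statistical-style confidences might look like. This is purely illustrative and nothing like Cyc's actual machinery; the facts, rules, and confidence numbers are all invented:

    # Toy forward chaining over weighted rules. Purely illustrative:
    # the facts, rules, and confidence numbers are made up, not from Cyc.
    facts = {"tweety_is_bird": 1.0}

    # Each rule: (premise, conclusion, confidence in the inference).
    rules = [
        ("tweety_is_bird", "tweety_has_feathers", 0.99),
        ("tweety_is_bird", "tweety_can_fly", 0.85),  # penguins exist
        ("tweety_can_fly", "tweety_has_wings", 0.95),
    ]

    # Keep applying rules until no conclusion gains a higher confidence.
    changed = True
    while changed:
        changed = False
        for premise, conclusion, conf in rules:
            if premise in facts:
                derived = facts[premise] * conf
                if derived > facts.get(conclusion, 0.0):
                    facts[conclusion] = derived
                    changed = True

    for fact, conf in sorted(facts.items()):
        print(f"{fact}: {conf:.2f}")

The symbolic half is the rule base; the statistical half is the confidence arithmetic. The hard part, as above, is that a real system would have to learn both from observations rather than having them typed in.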
-
If you make robots, they will kill you all trying to seek their God.
-
I think your professor is right to regard symbolic logic as pretty hopeless for these purposes, shandrin. After 22 years of accumulating an ever-larger database, I think it's about time CYC showed some proper results, if it's ever going to. Alas, I doubt whether statistical inference is a good enough approach either. All these formal techniques are special cases of a much wider reasoning capacity which allows human beings to deal with things like relevance, which formal methods don't seem able to cope with. Personally, I don't think this wider reasoning ability is understood at all - I'm not even sure we can describe it accurately. But perhaps that's a bit pessimistic.
-
The first post in this mefi Cyc thread said it best: 1989 called, they want their future back.
-
Interesting stuff, Fuyu, but I intended something slightly different. The logics you mention, if I understand them correctly, introduce the notion of relevance with a view to eliminating some of the counter-intuitive aspects of classical logic. But they rely on our already having an intuitive understanding of what relevance is, and indeed, of which things are relevant: they offer methods of applying relevance, but not a way of analysing or calculating relevance. But I believe that's what CYC needs to help it decide which of the infinite number of inferences it could draw from its data are actually worth having. What I had in mind was the kind of thing Sperber and Wilson and others have attempted to tackle in the field of pragmatics. They rejected the "code model" but went on to propose a formalism of their own which IMHO is equally hopeless. Happy to be educated if I'm talking bollocks, of course.
-
Couldn't disagree - your point about 3, the Bible, etc. captures what I intended quite nicely.
-
On the other hand, in real systems there often isn't a clear separation of domains. Any database of facts that attempts to be faithful to the compendium of human knowledge will have too many interconnections. Which is where, incidentally, the strength of Fluid Analogies comes in. The theory is that we are creatures geared to make analogies from information we already know (you know how to add integers, now do the same thing with decimals, for a simple example, or do the same thing with Johnny and Alice, for a slightly more illustrative example). The catch is that right now fluid analogies only work in microdomains that would be difficult to translate into a functioning human-like brain. Of course, that's one of the reasons it seems more likely to succeed: it's not trying to say that we understand everything and can pull it off from scratch; it's saying we think we have this little part, so let's see if we can build it up from there.
-
That's always been the problem with AI, though, hasn't it, Sandspider - the programs work well in restricted domains, but once you start to add realistic complexity, they run into combinatorial explosion or the frame problem and promptly fall over? I speak from a standpoint of complete ignorance of the book, so I may be missing the point. What's the difference between a Fluid Analogy and an ordinary one?
-
I thought a Fluid Analogy was one I'd spilled my soda on. (My apologies; I can't get serious today.)
-
Plegmund, the generic AI approach is to try to mimic human intelligence in a general, wide-open sort of way, usually in an effort to pass the Turing Test by holding a conversation with a human. Invariably, and I mean invariably, the "AI" will fail the test spectacularly. The Fluid Analogies Research Group, on the other hand, tried to make something that could be creative in a very limited way. They started with something called "Copycat", where the computer is given an alphabet and a few rules (such as "Adjacent Letter" and "Forward" and "Backward" and such), and you give it an analogy to try to copy. A simple example is "A -> B as C -> ?", and the computer would generally answer "D". If it were having a bad day, it might answer "B", but usually not on such a simple example. Things get really interesting when you give it harder analogies, like "A -> C as Z -> ?". The computer doesn't know wraparound, so it couldn't say "B", but it might say "X", figuring that A and Z are both border cases, so you want something two spaces farther from the border. And there are some even more interesting examples available. Anyways, they're working on expanding the domains, and it would be interesting to see if, eventually, research like this could make a creative computer. I highly recommend Fluid Concepts and Creative Analogies if you have an interest in things like artificial creativity or the cognitive psychology behind it.
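To give a flavour of the letter-string idea in code, here is a crude Python toy. It is nothing like the real Copycat architecture; the function names and the mirror-the-offset rule are just my rendering of the two examples above:

    # A crude letter-string analogy toy in the spirit of Copycat.
    # Not the real architecture -- just the "A -> B as C -> ?" idea,
    # with the no-wraparound behaviour described above.
    ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

    def shift(letter, offset):
        """Shift a letter by offset; refuse to wrap around the alphabet."""
        i = ALPHABET.index(letter) + offset
        return ALPHABET[i] if 0 <= i < len(ALPHABET) else None

    def solve(a, b, c):
        """Answer "a -> b as c -> ?": try the literal offset first,
        then the mirror-image offset for border cases."""
        offset = ALPHABET.index(b) - ALPHABET.index(a)
        return shift(c, offset) or shift(c, -offset)

    print(solve("A", "B", "C"))  # D: apply the same +1 step
    print(solve("A", "C", "Z"))  # X: +2 falls off the end, so mirror to -2

The real program does vastly more than this (its answers are stochastic, which is why it can have a "bad day"), but the border-case answer described above falls out of even this crude mirror rule.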