I have, for some time, been interested in robotics and computers. I would imagine that most of the visitors to this site, similarly, have at least a moderate interest in computers. Most recently, I have become interested in Artificial Intelligence. I am not referring to the "AI" often found in games, as that is rarely, if ever, a truly artificially intelligent machine. This document primarily chronicles my search for a true thinking machine.
Many types of "artificial intelligence" have been developed in an attempt to make a machine that resembles a thinking being. Unfortunately, the focus is often on exactly that - making an unthinking machine resemble a thinking being. Some recent developments show promise for actual thought in machines, which is a hopeful step toward the development of intelligent machines.
Rules-based AI has been around for a long time. Though it can simulate a human opponent in games, this type is in no way an actual thinking machine. Clever use of the rules can create a lifelike impression, but the ability of this type of AI to actually learn and reason is typically completely non-existent. What "learning" does exist may merely store information to look for known patterns. No actual cognition is involved with this type of "AI."
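To make the point concrete, here is a minimal sketch of what a rules-based "AI" amounts to: a fixed table of condition/action pairs, consulted in order, with no learning whatsoever. The situation keys and actions are invented for illustration.

```python
# A minimal rules-based "AI": a fixed priority-ordered table of
# condition -> action rules. Nothing here ever changes or learns.
# The "health"/"enemy_visible" keys are hypothetical examples.

RULES = [
    (lambda s: s["health"] < 20, "retreat"),
    (lambda s: s["enemy_visible"], "attack"),
    (lambda s: True, "patrol"),  # catch-all default rule
]

def choose_action(situation):
    """Return the action of the first rule whose condition matches."""
    for condition, action in RULES:
        if condition(situation):
            return action

print(choose_action({"health": 50, "enemy_visible": True}))  # attack
print(choose_action({"health": 10, "enemy_visible": True}))  # retreat
```

However cleverly the rule table is written, the behaviour is fully determined in advance, which is exactly why no cognition is involved.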
"Fuzzy Logic" was developed as a way to make artificial intelligences less predictable. It can produce more lifelike results than traditional rules-based artificial intelligences, but when it comes down to it, it's still rules, with a little bit of depth added by making things less clear-cut. The learning ability of this is the same as traditional rules-based, but it wasn't long before that changed to form a new type of AI: the "Neural Network."
"Neural Networks" are very similar to "Fuzzy Logic," but are capable of adapting to perform a task more accurately, more efficiently, faster, or any combination of the above. The values representing the strength of a relationship between two "nodes" in the network varies based on the results of a given action. This can basically be thought of as educated guess-and-check. The network tries something with its current values, and if it was better than the last time, it reasons that it must be closer to the ideal values than it was last time. Through this process it should theoretically narrow down to the ideal values for the given conditions. It constantly attempts to get more resolution on its values so the leaning is ongoing, though it tapers off when the values get close to the ideals. If the situation changes, the values will perform the same process to narrow down the ideal values for the new situation. Though an interesting learning system, it is not as powerful as what might be considered its successor: "Genetic Programming" or "Genetic Algorithms."
"Genetic Programming" or "Genetic Algorithms" are very similar to neural networks, but with a number of advantages. They test numerous settings which vary wildly. Those with the best performance are merged while the worst performers are dropped. The "evolved" programs/algorithms are matched against each other and against new random versions. Through this process, the best performers should end up merged together and standing head and shoulders above the rest. In some systems, an element of "mutation" is thrown in whereby a successful program is tweaked somewhat in the hopes that that tweak will be an improvement. This type of AI has been demonstrated to eventually produce results similar to that which a human would produce, however it requires its goals and capabilities to be very clearly defined. "Genetic Programming" is capable of finding a good solution to a given problem, but is not capable of figuring for itself what problems need to be solved, unless that problem is part of a larger problem that has been defined for it. It is also limited to problems of measurable nature - it can only find a solution to something that can be attempted and have varying degrees of success. Something that is all or nothing - such as "keep yourself alive" is attainable only by fluke using this type of system. This characteristic is also shared by neural networks.
I am very interested in the development of artificial intelligences; however, I do not feel that any of the systems that have been made public are suitable. Though they are very neat, I would like to make a machine that would be as intelligent as, if not more intelligent than, a human being. To this end I have come up with a couple of methods of producing artificial intelligence, though experimentation with them is far from complete.
This is a problem that I have struggled with for some time. It is also quite popular in some science fiction films and books. The artificial intelligence systems that I have planned out seek to accomplish their goals - and only to accomplish those goals, by any means necessary. Though it sounds like something out of science fiction, they literally have no qualms about hurting people if their reasoning tells them that it will get them closer to achieving their goal. My end goal is to create a functional android, with a humanoid body and a thinking "brain." It sounds far-fetched; however, I am confident. Number 1, the AI schemes I've come up with may well be capable of independent thought, and Number 2, think about how far technology has come in the last 100 years. I've likely got about 100 more to go before I bite the dust. This problem must be solved, however, before I would dare to give an artificially intelligent machine a form in which it could conceivably cause harm.
I recently had an idea of how this might be solved. I read recently that approximately one in a hundred people are born sociopathic, though only a small fraction of sociopaths turn out criminal. Allegedly, this is a result of societal influences, primarily the fear of punishment and the knowledge that they would be punished if they were to perform a criminal act. I've been pondering whether or not there is a way to program this sort of "fear of reprisal" into the artificial intelligence without compromising its ability to reason.
This is the first AI that I came up with. It resembles the way that human beings learn and reason. Its experiences are catalogued, and when faced with a situation, it sorts its experiences by relevance and considers what it did in similar situations and what the outcome was. Using that, it predicts what the result of any given action would be and compares the favourability of that result to the predicted results of its other options. The option predicted to have the most favourable outcome is the action that will be taken. A little bit of randomness may be thrown in as well, as mistakes can sometimes be the best teacher. (So why not force them to learn the hard way sometimes?)
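One way the catalogue-and-compare idea might look in code: store (situation, action, outcome) triples, weight each remembered outcome by how similar its situation is to the current one, and pick the action with the best predicted outcome, with an optional dash of randomness for learning the hard way. The situation features, actions, and similarity measure below are all invented for illustration.

```python
import random

# Sketch of the experience-catalogue idea: predictions are
# similarity-weighted averages over remembered outcomes, and the
# most favourable predicted action wins. All names are hypothetical.

def similarity(a, b):
    """Higher when situations (feature tuples) a and b are closer."""
    return 1.0 / (1.0 + sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_outcome(memory, situation, action):
    """Relevance-weighted average outcome of this action in memory."""
    relevant = [(similarity(situation, s), o)
                for s, a, o in memory if a == action]
    if not relevant:
        return 0.0  # no experience at all: neutral prediction
    total = sum(w for w, _ in relevant)
    return sum(w * o for w, o in relevant) / total

def choose(memory, situation, actions, explore=0.0):
    """Pick the action predicted most favourable; with probability
    'explore', make a deliberate mistake and learn the hard way."""
    if random.random() < explore:
        return random.choice(actions)
    return max(actions, key=lambda a: predict_outcome(memory, situation, a))

memory = [((0, 0), "advance", 1.0),   # advancing went well here...
          ((5, 5), "advance", -1.0),  # ...and badly here
          ((5, 5), "wait", 0.5)]
print(choose(memory, (4, 4), ["advance", "wait"]))  # wait
```

Because the predictions come from whole situations rather than task-specific values, experience gathered on one task can inform a different one, which is the key difference from a self-tuning network.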
This is a new AI system that I thought up. It thinks very unlike a human being; instead, it attempts to break down and simplify goals into something conceivable to the computer. Based on the desired end scenario, a plan is formed of the steps that might be taken to make the current situation resemble the AI's ideal situation. These steps are broken down into the actions required to complete them, and those actions are broken down in turn. Eventually, the individual actions are broken down into tangible chunks that the computer can complete simply by setting a variable and waiting.
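The recursive break-down can be sketched directly: each goal maps to sub-steps, and decomposition recurses until only primitive actions remain - the "tangible chunks" the machine can execute. The goal names and the decomposition table are invented purely for illustration.

```python
# Goal segmentation sketch: recursively break a goal into sub-steps
# until only primitive, directly executable actions remain.
# The decomposition table below is a hypothetical example.

DECOMPOSE = {
    "make_tea":    ["boil_water", "prepare_cup"],
    "boil_water":  ["fill_kettle", "switch_kettle_on"],
    "prepare_cup": ["get_cup", "add_teabag"],
}

def plan(goal):
    """Flatten a goal into an ordered list of primitive actions."""
    if goal not in DECOMPOSE:   # tangible chunk: execute it as-is
        return [goal]
    steps = []
    for sub in DECOMPOSE[goal]:
        steps.extend(plan(sub))
    return steps

print(plan("make_tea"))
# ['fill_kettle', 'switch_kettle_on', 'get_cup', 'add_teabag']
```

A real system would also need to check that each step's preconditions still hold as the situation changes, which is the forethought problem raised in the discussion below.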
Dante: Well, the precedence-weighted thing looks like a 'neural network' construct. The 'goal segmentation' thingy is something which is already used in AI research. Those systems are called planners. They work by breaking up tasks into subtasks until they have created a plan which can be directly executed. It's a really interesting domain, but it lacks learning capabilities. Genetic algorithms are also interesting, but most of them need too many iterations to actually come up with something that might work.
All those attempts in AI research look to me as if they are just pared-down brute-force searches. All of those above, for example, miss the 'deduction' factor. They don't recognize the transitivity of things. Getting a knot in my brain...
Nytrogen: The idea of allowing computers to make mistakes to teach them better sounds shaky to me. I think that a major advantage of computers is that they don't have to learn the hard way to get the lesson.
I hate to sound old-fashioned, but I think that true computer intelligence could never be feasible, no matter the advance in technology. "Planners" sound unreasonable to me as well; a computer would need quite a few available "steps" in order to reasonably complete any "task" given it... as well as the problem of forethought as it relates to situations changing as steps are applied.
Dante: The idea behind planners is, of course, to perform changes in the current "situation". The changed situation (state) can then be looked at as if it is the new "current state", etc. I think it's a major disadvantage of most games that their AI isn't able to learn the hard way. They all try to simulate intelligence, and while nobody can currently clearly describe what intelligence is, I'm sure that learning is a big part of it. Doesn't it ever happen that you play a game and find a flaw in the AI? Mostly minor things, but it'd be cool if the "monster" or whatever would change behaviour on the next encounter of a similar situation. (Of course, this behaviour can be done by a pattern matcher.)
Foxpaw: The precedence-weighted idea is actually quite different from a neural network. I don't know how to explain it other than what's above, but it's very different from a neural network. It does not just refine itself to perform a given task better - its experiences can be applied to other tasks. I actually had a working implementation, but I got rid of it because I couldn't solve the HAL 9000 complex it had. I originally made it to play chess, then later changed it to checkers. Based on the experience it had with chess, even though the pieces were completely different in checkers, it already had half an idea of what to do.
I've never heard of a planner, so I can't speak for its similarity to the goal segmentation idea; they may be similar or different.
Tarquin: Could we bring this out of your WalledGarden? I think the long-term future of AI in gaming could be an interesting topic for this wiki as a whole.
Foxpaw: Well, although my intention in developing sentient computers is to build androids with which to conquer Earth, I suppose the same concepts could be applied to gaming. I don't mind if you move it.