Artificial Intelligence Learning – The key is playing


BIGFUN: AI in game development.

Historically, AI has involved creating a rule set and applying those rules with some feedback loop, such as a PID controller or a neural network.
A large training set was required to give the processor options, and there was always the danger of over-training the system and getting artificially biased results.
Modern AI systems use a no-rules approach to learning, allowing the system to generate its own rules.

A prime example is object recognition. Teaching a machine to recognise cats used to involve storing the edges of hundreds of pictures of cats and comparing an arbitrary input image against these trained data sets to deduce a probability of cat-ness. The results were far from perfect: a cat with an unusual posture was seldom recognised, because the algorithm was looking for the things that humans considered the epitome of cat-ness, such as a clear silhouette. With modern storage being cheap and fast, it is possible to feed in many videos of cats with far less constrained rule sets, and then to feed in many videos of things that are not cats. It turns out the videos of things that are not cats are just as important in determining what a cat is.
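As a rough illustration only, the sketch below uses synthetic feature vectors in place of real video frames and a plain logistic-regression classifier rather than an actual vision model; the point is simply that the negative "not cat" examples shape the decision boundary just as much as the positive ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: each row plays the role of a feature vector extracted from
# a video frame. "Cat" examples cluster in one region, "not cat" in another.
cat_features = rng.normal(loc=+1.0, scale=1.0, size=(200, 16))
not_cat_features = rng.normal(loc=-1.0, scale=1.0, size=(200, 16))

X = np.vstack([cat_features, not_cat_features])
y = np.concatenate([np.ones(200), np.zeros(200)])  # 1 = cat, 0 = not cat

# Logistic regression trained by gradient descent: without the negative
# examples, nothing would pull the boundary away from "everything is a cat".
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probability of cat-ness
    w -= lr * (X.T @ (p - y)) / len(y)
    b -= lr * np.mean(p - y)

new_frame = rng.normal(loc=+1.0, scale=1.0, size=16)
print("probability of cat:", 1.0 / (1.0 + np.exp(-(new_frame @ w + b))))
```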

Cross-domain knowledge.
Historically, audio, video and word-based AI took radically different approaches to pattern recognition, with neural networks as a common learning component. Modern AI algorithms are domain-independent: any data set can be used for training. A word search and an image search, when paired, can yield remarkable results that humans can understand.
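As a hedged sketch of the pairing idea, the snippet below ranks images against a text query purely by comparing embedding vectors. The vectors and names here are made up; in a real system they would come from text and image encoders trained into a shared vector space.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two embeddings, regardless of which domain produced them."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings: the text vector could represent the query
# "sleeping cat", the image vectors two candidate pictures.
text_embedding = np.array([0.9, 0.1, 0.3])
image_embeddings = {
    "img_001": np.array([0.8, 0.2, 0.4]),
    "img_002": np.array([-0.5, 0.9, 0.1]),
}

# Rank the images against the text query with the same similarity measure,
# even though the two data sets come from different domains.
ranked = sorted(image_embeddings.items(),
                key=lambda kv: cosine_similarity(text_embedding, kv[1]),
                reverse=True)
print(ranked[0][0], "is the best match for the query")
```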

Pairs and relative data.
Modern AI uses relative data rather than absolute data to determine features. A relative analysis is insensitive to the absolute level of the signal, because the learning is only concerned with how segments change when matching patterns.
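A minimal sketch of the idea, assuming a one-dimensional signal: taking differences between consecutive samples leaves the features unchanged when the whole signal is shifted by a constant level, so only the pattern of change matters.

```python
import numpy as np

def relative_features(signal: np.ndarray) -> np.ndarray:
    """Differences between consecutive samples: the absolute level drops out,
    only the shape of the signal remains."""
    return np.diff(signal)

quiet = np.array([1.0, 1.2, 1.5, 1.3, 1.0])
loud = quiet + 10.0  # same pattern, very different absolute level

# Both versions of the signal produce identical relative features.
print(np.allclose(relative_features(quiet), relative_features(loud)))  # True
```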

Breaking the rules.
One key difference between early AI and modern AI is the removal of rules. It turns out that allowing a processor to create its own rules is vastly more efficient than forcing the data through a fixed rule set, and the underlying hardware can tune the processing to suit the available system.

So how do you create a truly artificial being?
We want our being to teach itself. We let it play by adding randomness to its learning, just as a baby randomly flails and makes sounds to test the limits and reactions of its environment.
But human simulation aside, let’s start with a dot. Our dot requires some motivation to do anything, just as humans have motivations that evolved into reproduction, survival and luxury.
Our dot has a target position and a limited lifespan. We program our dot to spend a portion of its life playing and discovering its own rules. The next part is achieving its goal. The final part is transferring its rules to others.
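One possible way to sketch this in code, with the class name, the phase lengths and the notion of a "rule" as simply a move that helped being illustrative assumptions rather than a fixed design:

```python
import random

class Dot:
    """A dot with a target position, a limited lifespan, a play phase,
    a goal phase, and rules it can hand on to the next dot."""

    def __init__(self, target: float, lifespan: int, rules=None):
        self.position = 0.0
        self.target = target
        self.lifespan = lifespan
        self.rules = list(rules) if rules else []  # inherited or self-discovered moves

    def live(self):
        play_phase = self.lifespan // 3  # a portion of its life is spent playing
        for step in range(self.lifespan):
            if step < play_phase:
                # Play: random flailing to test the environment.
                move = random.uniform(-1.0, 1.0)
                before = abs(self.target - self.position)
                self.position += move
                if abs(self.target - self.position) < before:
                    self.rules.append(move)  # keep moves that helped as learned rules
            elif self.rules:
                # Goal phase: reuse whichever discovered rule closes the gap most.
                move = min(self.rules, key=lambda m: abs(self.target - (self.position + m)))
                if abs(self.target - (self.position + move)) < abs(self.target - self.position):
                    self.position += move
        return self.rules  # final phase: the rules are handed to others

parent = Dot(target=5.0, lifespan=30)
inherited = parent.live()
child = Dot(target=5.0, lifespan=30, rules=inherited)  # transfer the rules onward
child.live()
print(f"parent ended at {parent.position:.2f}, child at {child.position:.2f}")
```

Here the parent dot hands its discovered rules to a child dot, which begins its own lifespan with that head start, mirroring the final step of transferring its rules to others.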