Feb 20 2019
A huge challenge for artificial intelligence (AI) is the ability to look past superficial patterns in observations to infer the underlying causal processes. A new study by KAUST and an international team of leading experts has produced a fresh approach that goes beyond superficial pattern detection.
Humans have a remarkably refined sense of intuition, or inference, that allows a person to understand, for instance, that a purple apple could be a red apple exposed to blue light. This sense is so highly developed in humans that people are also inclined to see relationships and patterns where none exist, which increases the tendency toward superstition.
Codifying this type of insight in AI is a challenge, and scientists are still figuring out where to begin; yet it represents one of the most important differences between machine and natural thought.
A partnership was struck five years ago between KAUST-affiliated researchers Hector Zenil and Jesper Tegnér, and Narsis Kiani and Allan Zea from Sweden’s Karolinska Institutet. They began applying algorithmic information theory to network and systems biology in order to address fundamental problems in molecular circuits and genomics. That partnership resulted in the development of an algorithmic approach to inferring causal processes that could form the foundation of a universal model of AI.
Machine learning and AI are becoming ubiquitous in industry, science, and society. Despite recent progress, we are still far from achieving general purpose machine intelligence with the capacity for reasoning and learning across different tasks. Part of the challenge is to move beyond superficial pattern detection toward techniques enabling the discovery of the underlying causal mechanisms producing the patterns.
Jesper Tegnér, Professor, KAUST.
This causal disentanglement, however, turns out to be highly challenging when a number of different processes are entwined, as is frequently the case in molecular and genomic data.
“Our work identifies the parts of the data that are causally related, taking out the spurious correlations and then identifies the different causal mechanisms involved in producing the observed data,” says Tegnér.
The technique is founded on the well-established mathematical theory of algorithmic probability as the basis for an optimal inference machine. The key difference from earlier approaches, however, is the shift from an observer-centric view of the problem to an objective analysis of observations based on their deviations from randomness.
We use algorithmic complexity to isolate several interacting programs, and then search for the set of programs that could generate the observations.
Jesper Tegnér, Professor, KAUST.
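The idea of measuring how far data deviates from randomness can be crudely illustrated with lossless compression. This is only a rough proxy, not the researchers' actual method (their framework goes beyond what compressors can capture), but it conveys the intuition: structured data compresses well, while random data does not.

```python
import random
import zlib

def compression_ratio(data: bytes) -> float:
    """Compressed length over original length; lower means more detectable structure."""
    return len(zlib.compress(data)) / len(data)

# A highly regular sequence, far from random.
structured = b"01" * 500

# Pseudo-random bytes, close to algorithmically random for a compressor.
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(1000))

print(compression_ratio(structured))  # much smaller
print(compression_ratio(noisy))       # near (or above) 1.0
```

A low ratio signals a short generating description and hence a likely underlying mechanism; a ratio near 1 signals data indistinguishable from noise by this proxy.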
The researchers demonstrated their technique by applying it to the interleaved outputs of several computer programs. The algorithm finds the shortest combination of programs that could generate the entangled output string of 1s and 0s.
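A toy sketch of this kind of decomposition can be written in a few lines. The code below is not the authors' algorithm: it assumes the two hidden processes are simple repeating patterns and that their outputs are strictly alternated, and it uses seed length as a crude stand-in for program complexity. Within those assumptions, it brute-forces the shortest pair of "programs" whose interleaved outputs reproduce an observed bit string.

```python
from itertools import product

def repeat_to(seed: str, n: int) -> str:
    """Generate n symbols by repeating a seed pattern (a stand-in for a tiny program)."""
    return (seed * (n // len(seed) + 1))[:n]

def interleave(a: str, b: str) -> str:
    """Alternate symbols from two streams, mimicking entangled processes."""
    return "".join(x + y for x, y in zip(a, b))

def shortest_decomposition(observed: str, max_seed_len: int = 4):
    """Brute-force the shortest pair of repeating seeds whose interleaved
    outputs reproduce the observed string. Returns (total length, seed_a, seed_b)."""
    half = len(observed) // 2
    stream_a, stream_b = observed[0::2], observed[1::2]
    best = None
    for la in range(1, max_seed_len + 1):
        for seed_a in map("".join, product("01", repeat=la)):
            if repeat_to(seed_a, half) != stream_a:
                continue
            for lb in range(1, max_seed_len + 1):
                for seed_b in map("".join, product("01", repeat=lb)):
                    if repeat_to(seed_b, half) != stream_b:
                        continue
                    if best is None or la + lb < best[0]:
                        best = (la + lb, seed_a, seed_b)
    return best

# Two hidden processes: one repeats "01", the other repeats "110".
observed = interleave(repeat_to("01", 12), repeat_to("110", 12))
print(shortest_decomposition(observed))
```

The real challenge the researchers address is far harder, since neither the program space nor the way the processes are mixed is known in advance; the sketch only shows why "shortest set of generating programs" is a meaningful target.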
This technique can equip current machine learning methods with advanced complementary abilities to better deal with abstraction, inference, and concepts, such as cause and effect, that other methods, including deep learning, cannot currently handle.
Hector Zenil, Professor, KAUST.