Biological neural networks are systems of extraordinary computational capability shaped by evolution, development, and lifelong learning. The interplay of these elements leads to the emergence of biological intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) employ simulated evolution in silico to breed plastic neural networks, with the aim of autonomously designing and creating learning systems. EPANN experiments evolve networks that combine innate properties with the ability to change and learn in response to experience across different environments and problem domains. The aims of EPANNs include autonomously creating learning systems, bootstrapping learning from scratch, recovering performance in unseen conditions, testing the computational advantages of particular neural components, and deriving hypotheses on the emergence of biological learning. Thus, EPANNs may include a wide variety of neuron types and dynamics, network architectures, plasticity rules, and other factors. While EPANNs have seen considerable progress over the last two decades, current scientific and technological advances in artificial neural networks are setting the conditions for radically new approaches and results. By exploiting the increased availability of computational resources and simulation environments, the often challenging task of hand-designing learning neural networks could be replaced by more autonomous and creative processes. This paper brings together the variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and possible developments are presented.
Over the course of millions of years, evolution has led to the emergence of innumerable biological systems, and of intelligence itself, crowned by the evolution of the human brain. Evolution, development, and learning are the fundamental processes that underpin biological intelligence. Thus, it is no surprise that scientists have tried to engineer artificial systems that reproduce such phenomena (Dawkins, 2003; Sanchez et al., 1996; Sipper et al., 1997). The fields of artificial intelligence (AI) and artificial life (AL) (Langton, 1997) are inspired by nature and biology in their attempt to create intelligence and forms of life from human-designed computation: the main idea is to abstract the principles from the medium, i.e., biology, and to use such principles to devise algorithms and devices that reproduce properties of their biological counterparts. One possible way to design complex and intelligent systems, compatible with our natural and evolutionary history, is to simulate natural evolution in silico, as in the field of evolutionary computation (Eiben & Smith, 2015; Holland, 1975). Sub-fields of evolutionary computation such as evolutionary robotics (Harvey, Husbands, Cliff, Thompson, & Jakobi, 1997; Nolfi & Floreano, 2000), learning classifier systems (Butz, 2015; Lanzi, Stolzmann, & Wilson, 2003), and neuroevolution (Yao, 1999) specifically research algorithms that, by exploiting artificial evolution of physical, computational, and neural models, seek to discover principles behind intelligent and learning systems. In the past, research in evolutionary computation, particularly in the area of neuroevolution, focused predominantly on the evolution of static systems or networks with fixed neural weights: evolution was seen as an alternative to learning rules for finding optimal weights in an artificial neural network (ANN).
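To make this earlier paradigm concrete, the following is a minimal illustrative sketch, not a method from the reviewed literature: a simple (1+λ) evolution strategy searches directly over the nine weights of a tiny fixed-topology network on a toy XOR task. The network shape, mutation scale, and task are all hypothetical choices for illustration.

```python
import math
import random

random.seed(0)

# Hypothetical toy setup: a fixed-topology 2-2-1 tanh network whose nine
# parameters (weights and biases) are evolved directly; no learning rule
# ever modifies them after evolution.
def forward(w, x):
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    # Negative squared error on XOR: higher is better.
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)

def evolve(generations=300, offspring=20, sigma=0.4):
    # (1+lambda) evolution strategy: mutate the champion with Gaussian
    # noise and keep any improvement.
    best = [random.gauss(0, 1) for _ in range(9)]
    best_f = fitness(best)
    for _ in range(generations):
        for _ in range(offspring):
            child = [w + random.gauss(0, sigma) for w in best]
            f = fitness(child)
            if f > best_f:
                best, best_f = child, f
    return best, best_f
```

In this setting evolution alone determines the weights, and the deployed network is entirely static; the contrast with EPANNs is that the latter evolve networks whose weights continue to change after deployment.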
Similarly, in traditional and deep ANNs, learning is typically confined to an initial training phase, so that weights remain static once the network is deployed. Recently, however, inspiration has increasingly come from the observation that intelligence in biological organisms relies considerably on powerful and general learning algorithms, designed by evolution, that operate both during development and throughout life.
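The alternative paradigm, weights that keep changing after deployment, can be sketched with a basic Hebbian rule. In the minimal example below (an illustration, not a specific model from the literature), the learning rate eta plays the role of an innate parameter of the kind evolution would tune, while the connection weight itself adapts online as the network experiences activity.

```python
import math

# Basic Hebbian rule: the weight grows with correlated
# presynaptic/postsynaptic activity.
def hebbian_update(w, pre, post, eta):
    return w + eta * pre * post

# Hypothetical "lifetime" of a single plastic connection: eta is fixed
# (innate, the quantity evolution would search over), while w changes
# continually in response to experience.
def lifetime(eta=0.1, steps=50):
    w = 0.0
    for _ in range(steps):
        pre = 1.0                        # presynaptic activity
        post = math.tanh(w * pre + 0.5)  # postsynaptic activity (biased drive)
        w = hebbian_update(w, pre, post, eta)
    return w
```

The connection strengthens during the simulated lifetime without any gradient descent; in an EPANN, evolution searches over such plasticity parameters (and often over richer rule forms and architectures) rather than over the weights themselves.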