Progress and Future of Evolved Plastic Artificial Neural Networks

Persian article title: Born to Learn: The Inspiration, Progress, and Future of Evolved Plastic Artificial Neural Networks
English article title: Born to learn: The inspiration, progress, and future of evolved plastic artificial neural networks
Journal/Conference: Neural Networks
Related academic disciplines: Computer Engineering, Information Technology
Related academic specializations: Artificial Intelligence, Computer Networks
Persian keywords: Artificial neural networks, Lifelong learning, Plasticity, Evolutionary computation
English keywords: Artificial neural networks, Lifelong learning, Plasticity, Evolutionary computation
Type of article: Review Article
Digital Object Identifier (DOI): https://doi.org/10.1016/j.neunet.2018.07.013
University: Department of Computer Science - Loughborough University - UK
English article pages: 20
Publisher: Elsevier
Publication venue: Journal
Article type: ISI
Year of publication: 2018
Impact factor: 8.446 in 2017
H-index: 121 in 2019
SJR: 2.359 in 2017
ISSN: 0893-6080
Quartile: Q1 in 2017
English article format: PDF
Translation status: Not translated
English article price: Free
Is this a base article: Yes
Product code: E10732
Table of contents (English)

Abstract

1- Introduction

2- Inspiration

3- Properties, aims, and evolutionary algorithms for EPANNs

4- Progress on evolving artificial plastic neural networks

5- Future directions

6- Conclusion

References

Excerpt from the article (English)

Abstract

Biological neural networks are systems of extraordinary computational capabilities shaped by evolution, development, and lifelong learning. The interplay of these elements leads to the emergence of biological intelligence. Inspired by such intricate natural phenomena, Evolved Plastic Artificial Neural Networks (EPANNs) employ simulated evolution in-silico to breed plastic neural networks with the aim to autonomously design and create learning systems. EPANN experiments evolve networks that include both innate properties and the ability to change and learn in response to experiences in different environments and problem domains. EPANNs’ aims include autonomously creating learning systems, bootstrapping learning from scratch, recovering performance in unseen conditions, testing the computational advantages of particular neural components, and deriving hypotheses on the emergence of biological learning. Thus, EPANNs may include a large variety of different neuron types and dynamics, network architectures, plasticity rules, and other factors. While EPANNs have seen considerable progress over the last two decades, current scientific and technological advances in artificial neural networks are setting the conditions for radically new approaches and results. Exploiting the increased availability of computational resources and of simulation environments, the often challenging task of hand-designing learning neural networks could be replaced by more autonomous and creative processes. This paper brings together a variety of inspiring ideas that define the field of EPANNs. The main methods and results are reviewed. Finally, new opportunities and possible developments are presented.
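
To make the idea of "breeding plastic neural networks" more concrete, the sketch below shows one common way such experiments are set up: an evolutionary loop searches over the coefficients of a generalized Hebbian plasticity rule, and each candidate rule is scored by how well a small network learns a toy association during its simulated lifetime. This is a minimal illustration only, not the paper's method; the toy task, the clamped teaching signal, and names such as `lifetime_fitness` and `genome` are hypothetical choices made here for brevity.

```python
# Minimal sketch: evolving the coefficients of a generalized Hebbian rule,
#   delta_w = eta * (A*pre*post + B*pre + C*post + D),
# so that a tiny plastic network learns a toy association during its "lifetime".
# Illustrative assumptions only; not the setup used in the reviewed paper.
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([0.0, 1.0], size=(4, 5))   # 4 binary input patterns, 5 inputs
targets = rng.choice([0.0, 1.0], size=(4, 2))    # desired 2-unit outputs

def lifetime_fitness(genome, steps=50):
    """Run one lifetime with the candidate plasticity rule, then score behaviour."""
    eta, A, B, C, D = genome
    w = np.zeros((5, 2))                         # innate weights start at zero
    for t in range(steps):
        pre = patterns[t % len(patterns)]
        post = targets[t % len(patterns)]        # output clamped to a teaching signal
        dw = eta * (A * np.outer(pre, post) + B * pre[:, None]
                    + C * post[None, :] + D)     # evolved Hebbian update
        w += dw
    out = np.tanh(patterns @ w)                  # test with free-running outputs
    return -np.mean((out - targets) ** 2)        # higher is better

# Simple truncation-selection evolutionary loop over the rule's coefficients.
pop = [rng.normal(0, 0.5, size=5) for _ in range(20)]
for gen in range(100):
    pop.sort(key=lifetime_fitness, reverse=True)
    parents = pop[:5]
    pop = parents + [p + rng.normal(0, 0.1, size=5) for p in parents for _ in range(3)]

pop.sort(key=lifetime_fitness, reverse=True)
print("best fitness:", lifetime_fitness(pop[0]))
```

Note that evolution here never sees the weights directly: it only shapes the learning rule, and the weights emerge from experience during each lifetime, which is the defining feature of EPANNs described in the abstract.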

Introduction

Over the course of millions of years, evolution has led to the emergence of innumerable biological systems, and intelligence itself, crowned by the evolution of the human brain. Evolution, development, and learning are the fundamental processes that underpin biological intelligence. Thus, it is no surprise that scientists have tried to engineer artificial systems to reproduce such phenomena (Dawkins, 2003; Sanchez et al., 1996; Sipper et al., 1997). The fields of artificial intelligence (AI) and artificial life (AL) (Langton, 1997) are inspired by nature and biology in their attempt to create intelligence and forms of life from human-designed computation: the main idea is to abstract the principles from the medium, i.e., biology, and utilize such principles to devise algorithms and devices that reproduce properties of their biological counterparts. One possible way to design complex and intelligent systems, compatible with our natural and evolutionary history, is to simulate natural evolution in-silico, as in the field of evolutionary computation (Eiben & Smith, 2015; Holland, 1975). Sub-fields of evolutionary computation such as evolutionary robotics (Harvey, Husbands, Cliff, Thompson, & Jakobi, 1997; Nolfi & Floreano, 2000), learning classifier systems (Butz, 2015; Lanzi, Stolzmann, & Wilson, 2003), and neuroevolution (Yao, 1999) specifically research algorithms that, by exploiting artificial evolution of physical, computational, and neural models, seek to discover principles behind intelligent and learning systems. In the past, research in evolutionary computation, particularly in the area of neuroevolution, was predominantly focused on the evolution of static systems or networks with fixed neural weights: evolution was seen as an alternative to learning rules to search for optimal weights in an artificial neural network (ANN). Also, in traditional and deep ANNs, learning is often performed during an initial training phase, so that weights are static when the network is deployed. Recently, however, inspiration has originated more strongly from the fact that intelligence in biological organisms considerably relies on powerful and general learning algorithms, designed by evolution, that are executed during both development and continuously throughout life.
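
For contrast with the plasticity example above, the following sketch illustrates the "static weight" neuroevolution that the introduction describes as the traditional focus of the field: evolution searches directly over a fixed weight vector, and the deployed network never changes during its lifetime. Again, this is a hypothetical minimal example (a (1+λ)-style search on XOR), not code from the paper.

```python
# Minimal sketch of traditional neuroevolution with static weights:
# an evolutionary search over the fixed weights of a tiny 2-2-1 network on XOR,
# with no learning during the network's lifetime. Illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

def forward(genome, x):
    # genome encodes: 2x2 hidden weights, 2 hidden biases, 2 output weights, 1 output bias
    W1 = genome[:4].reshape(2, 2); b1 = genome[4:6]
    w2 = genome[6:8];              b2 = genome[8]
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ w2 + b2)

def fitness(genome):
    pred = np.array([forward(genome, x) for x in X])
    return -np.mean((pred - y) ** 2)             # weights are fixed: no lifetime learning

# (1 + lambda)-style evolution strategy over the weight vector itself.
best = rng.normal(0, 1, size=9)
for gen in range(500):
    children = [best + rng.normal(0, 0.3, size=9) for _ in range(10)]
    candidate = max(children, key=fitness)
    if fitness(candidate) > fitness(best):
        best = candidate

print("final error:", -fitness(best))
```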