Book: An Introduction to Neural Networks
A long time ago, when I was getting introduced to artificial neural networks, I read this book a few times. I was recently looking through some ANN code, and that prompted me to write a review of the book.
Title: An Introduction to Neural Networks
Author: Kevin Gurney
Chapter 1: Neural Networks – an overview
This introductory chapter assumes zero knowledge of ANNs and explains to the reader what neural networks and artificial neural networks are, how they work in a very abstract way, why computer scientists study them, etc. Every chapter of this book ends with a summary and notes section, which is common among academically oriented books but definitely useful.
Chapter 2: Real and artificial neurons
Of course, K. Gurney begins by analyzing the fundamental component of a neural network: the neuron. After providing an overview of how real nerve cells (a.k.a. neurons) function, he moves on to artificial ones and discusses basic concepts like the Threshold Logic Unit (TLU), a neuron's signals, the time factor, etc.
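To give a flavor of what a TLU is, here's a tiny sketch in Python — my own illustration, not code from the book, and the weights and threshold are arbitrary demo values:

```python
def tlu(inputs, weights, threshold):
    """Threshold Logic Unit: fire (1) if the weighted sum of the
    inputs reaches the threshold, otherwise stay silent (0)."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# A two-input TLU behaving like a logical AND gate
# (weights 1, 1 and threshold 1.5 are an illustrative choice)
print(tlu([1, 1], [1, 1], 1.5))  # fires: 1
print(tlu([1, 0], [1, 1], 1.5))  # silent: 0
```

That's really all there is to the basic unit: a weighted sum and a hard threshold.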
Chapter 3: TLUs, linear separability and vectors
This chapter goes a little deeper into TLUs by discussing more complex issues around two-input TLUs. It goes through pattern classification, linear separation of different classes, etc., before moving on to vectors, and more specifically to vector addition and scalar multiplication. It concludes with more details on linear separability.
Chapter 4: Training TLUs: the perceptron rule
The first practical chapter on ANNs. I say this because it introduces neural network training and deals with subjects such as thresholds, weights, adjusting weights, etc. It gives a nice introduction to the perceptron, then moves on to networks with multiple nodes and layers, and ends with some practical issues.
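As an illustration of the kind of training the chapter covers, here's a minimal perceptron-rule sketch — my own toy code, not from the book; the learning rate and epoch count are arbitrary choices:

```python
def train_perceptron(samples, targets, lr=0.1, epochs=50):
    """Perceptron rule: nudge each weight by lr * error * input.
    The threshold is folded in as a bias weight on a constant input."""
    n = len(samples[0])
    w = [0.0] * (n + 1)  # last entry is the bias weight
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            xb = list(x) + [1.0]  # append the constant bias input
            y = 1 if sum(xi * wi for xi, wi in zip(xb, w)) >= 0 else 0
            err = t - y
            w = [wi + lr * err * xi for wi, xi in zip(w, xb)]
    return w

# Learn the linearly separable AND function
w = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
predict = lambda x: 1 if sum(xi * wi for xi, wi in zip(list(x) + [1.0], w)) >= 0 else 0
print([predict(x) for x in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # [0, 0, 0, 1]
```

Note that this only converges because AND is linearly separable — exactly the limitation chapter 3 sets up.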
Chapter 5: The delta rule
Anyone who has been involved in ANN projects knows that this is probably the most essential knowledge for understanding the internals of an ANN. Here, the author explains in detail the concept of gradient descent and how it is applied to an error function, which leads us to the delta rule. Finally, some practical examples are provided for better understanding.
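For a feel of what the delta rule does, here's a small sketch of gradient descent on the squared error of a linear unit — my own toy example (the data and hyperparameters are mine), the book's presentation is more careful:

```python
def delta_rule(samples, targets, lr=0.05, epochs=500):
    """Delta rule: per-pattern gradient descent on the squared error
    of a linear unit, w <- w + lr * (t - y) * x."""
    w = [0.0] * (len(samples[0]) + 1)  # bias folded in as last weight
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            xb = list(x) + [1.0]
            y = sum(xi * wi for xi, wi in zip(xb, w))  # linear output
            w = [wi + lr * (t - y) * xi for wi, xi in zip(w, xb)]
    return w

# Fit y = 2x + 1 from a few noiseless samples (toy data of my own)
w = delta_rule([(0,), (1,), (2,), (3,)], [1, 3, 5, 7])
print(round(w[0], 2), round(w[1], 2))  # ≈ 2.0 and 1.0
```

The key difference from the perceptron rule is that the error is measured on the raw (pre-threshold) activation, which is what makes the gradient well defined.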
Chapter 6: Multilayer nets and backpropagation
After the excellent introduction of chapter 5, this chapter moves on to more useful information, including the different training rules used in multilayer networks, the most common training algorithm (backpropagation), training speed, generalization, etc. It's a nice introduction to the various variables one has to consider when creating or managing an ANN.
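Backpropagation itself can be sketched for a tiny network. This is my own toy 2-2-1 setup on XOR, not code from the book, and like any backprop run it can land in a local minimum:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(w_h, w_o, x1, x2):
    """Forward pass through the 2-2-1 net; returns hidden and output activations."""
    xb = (x1, x2, 1.0)
    h = [sigmoid(sum(w * x for w, x in zip(row, xb))) for row in w_h]
    y = sigmoid(sum(w * v for w, v in zip(w_o, h + [1.0])))
    return h, y

def train_xor(epochs=5000, lr=0.5, seed=0):
    rng = random.Random(seed)
    w_h = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]  # hidden weights (+bias)
    w_o = [rng.uniform(-1, 1) for _ in range(3)]                      # output weights (+bias)
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    for _ in range(epochs):
        for (x1, x2), t in data:
            xb = (x1, x2, 1.0)
            h, y = forward(w_h, w_o, x1, x2)
            # deltas: output first, then distribute the blame to hidden units
            d_o = (t - y) * y * (1 - y)
            d_h = [d_o * w_o[i] * h[i] * (1 - h[i]) for i in range(2)]
            w_o = [w + lr * d_o * v for w, v in zip(w_o, h + [1.0])]
            w_h = [[w + lr * d_h[i] * x for w, x in zip(w_h[i], xb)]
                   for i in range(2)]
    return w_h, w_o

w_h, w_o = train_xor()
print([round(forward(w_h, w_o, a, b)[1], 2) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# with luck, close to 0, 1, 1, 0 — but local minima are possible
```

Even this toy version surfaces the variables the chapter discusses: learning rate, initialization, and how long training takes.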
Chapter 7: Associative memories: the Hopfield net
Beginning with an introduction to associative memory, the author presents the neural network that provides a physical analogy of associative memory, better known as the Hopfield network. He then discusses Hopfield-specific subjects like finding the weights, storage capacity, combinatorial optimization, etc.
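To make the "memory" idea concrete, here's a bare-bones Hopfield sketch — my own illustration (I use synchronous updates for brevity; the book derives the weight prescription properly):

```python
def hopfield_weights(patterns):
    """Hebbian prescription: w_ij is the sum over stored patterns of
    s_i * s_j (states are +/-1), with a zero diagonal."""
    n = len(patterns[0])
    return [[0 if i == j else sum(p[i] * p[j] for p in patterns)
             for j in range(n)] for i in range(n)]

def recall(w, state, steps=5):
    """Synchronous updates: each unit takes the sign of its net input."""
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(w[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s

# Store one pattern and recover it from a corrupted version (toy example)
p = [1, -1, 1, -1, 1, -1]
w = hopfield_weights([p])
noisy = [1, -1, -1, -1, 1, -1]  # one flipped bit
print(recall(w, noisy))  # settles back to the stored pattern
```

The "finding the weights" and "storage capacity" sections of the chapter are essentially about when and why this recall stops working as you store more patterns.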
Chapter 8: Self-organization
This is another neat way to introduce the Kohonen map, which is basically a self-organizing map. To get there, K. Gurney also includes sections on competitive learning and competitive dynamics, as well as a section dedicated to principal component analysis.
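Competitive learning, the stepping stone to the Kohonen map, can be sketched in a few lines. This is my own stripped-down 1-D version, without the neighborhood function that makes a true self-organizing map:

```python
import random

def competitive_1d(data, n_units=4, lr=0.3, epochs=30, seed=1):
    """Winner-take-all competitive learning on scalars: the unit
    closest to each input wins and gets pulled toward it."""
    rng = random.Random(seed)
    units = [rng.random() for _ in range(n_units)]  # random initial positions
    for _ in range(epochs):
        for x in data:
            win = min(range(n_units), key=lambda i: abs(units[i] - x))
            units[win] += lr * (x - units[win])  # move only the winner
    return sorted(units)

# Some units drift toward the two clusters around 0.1 and 0.9 (toy data)
print(competitive_1d([0.1, 0.12, 0.9, 0.88]))
```

Adding a neighborhood function (so a winner's neighbors also move) is what turns this into the topology-preserving Kohonen map the chapter builds up to.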
Chapter 9: Adaptive resonance theory: ART
Here, Gurney starts from the beginning by identifying the objectives of adaptive resonance theory before moving on to more complex subjects like the description of the networks, the ART family, etc.
Chapter 10: Nodes, nets and algorithms: further alternatives
Chapter 10 doesn't go into great detail. It simply informs the reader of various concepts to consider while working with ANNs, including more details on how synapses function in both the biological and artificial worlds, sigma-pi units, digital neural networks, etc. It's a pretty interesting chapter, in my opinion.
Chapter 11: Taxonomies, contexts and hierarchies
This is the final chapter of the book, and it simply classifies ANNs based on their use. Here you can find information on classification ANNs, the computational hierarchy of ANNs, ANNs for statistical analysis, and ANNs and intelligent systems. The chapter ends with a historical overview of neural networks.
So, I've read this book quite a few times and I still have the impression that it's a great book for getting introduced to neural networks. You aren't going to become an expert with the information provided here, but you'll definitely get a grasp of how ANNs function. In addition, it is written in an excellent and understandable manner (for an academic book), and it's only about 230 pages long. I think it's a good book. :)