Artificial Neural Network

Polychronization: Computation With Spikes describes the idea of polychronization, a concept in ANNs that explains the advantage of the time delays within neural circuitry that are sometimes thought to be a stumbling block when computationally modelling ANNs. Axonal propagation delays between neurons pose a difficulty because, mathematically, they turn a finite-dimensional system into an infinite-dimensional one. However, these delays can produce a maximal postsynaptic response: because the neurons in a polychronous group sit at different distances from their common target, a neuron with a longer conduction delay can fire correspondingly earlier, so that all spikes arrive simultaneously. An interesting consequence of polychronization is that a single neuron can be a member of multiple polychronous groups, leading to a combinatorial explosion in the number of possible groups within the brain and thus a significant increase in memory capacity.

From Wikipedia,

In neuroscience, the term polychronization describes the process of generating reproducible time-locked but not synchronous spiking patterns with millisecond precision. The term is derived from Greek poly meaning many and chronos meaning time or clock. Spiking neurons with axonal conduction delays and spike timing dependent plasticity (STDP) can spontaneously self-organize into groups and exhibit multiple polychronous patterns. These patterns represent memory, and their number often exceeds the number of neurons, or even synapses, in the network.

Polychrony

Polychrony occurs when events happen with a consistent pattern of timing relative to one another. To illustrate, consider the following situation. Every day Bob, Bill and Berta go to school according to these constraints:

  • Bob goes to school first
  • Bill goes to school 5 minutes after Bob
  • Berta goes to school 20 minutes after Bill

The three events are clearly not synchronized, but they are polychronized, because they occur at fixed intervals from one another and they occur regularly. Extending that example to neurons, assume we have four neurons A, B, C and D with the following constraints:

  • A,B,C have synaptic connections with D
  • A takes 10 ms to signal D
  • B takes 20 ms to signal D
  • C takes 50 ms to signal D

If they fire according to the following schedule, they will be polychronized:

  • C fires at time 0
  • B fires at time 30
  • A fires at time 40
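The schedule above can be checked with a short sketch. The delay and firing-time values are taken from the example; the variable names are illustrative:

```python
# Conduction delays (ms) from each presynaptic neuron to D, and the
# firing times from the schedule above.
delays = {"A": 10, "B": 20, "C": 50}
fire_times = {"A": 40, "B": 30, "C": 0}

# Arrival time at D = firing time + conduction delay.
arrivals = {n: fire_times[n] + delays[n] for n in delays}
print(arrivals)  # every spike reaches D at t = 50 ms
```

Even though the neurons fire at different times, all three spikes reach D at the same instant, which is exactly the polychronous arrangement.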

Polychrony does not imply that events finish at the same time; we could have a situation such as this:

  • C fires first
  • B fires 60 seconds later
  • A fires 400 seconds later

and still be polychronized. As long as the events fire with a consistent pattern of timing relative to one another, we say that they are polychronized. Note that synchrony is a special case of polychrony, with the interval between the multiple events being zero.

Polychronized Spiking

Classical neural computation involves neurons and synapses, with information passed between neurons via their firing rates. In this view, for the connection between two neurons to become stronger, the presynaptic neuron has to fire at a higher rate onto the postsynaptic neuron. Likewise, if the presynaptic neuron fires less rapidly, its connection with the postsynaptic neuron becomes weaker. Eugene Izhikevich does not refute this model, but proposes the following addition to it:

If neurons A, B and C are polychronized such that their spikes arrive at neuron D at the same time, their respective connections with D will be strengthened more than if the spikes had arrived independently.
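One way to picture this addition is a toy coincidence rule. This is a sketch, not Izhikevich's actual plasticity model, and all of the constants are illustrative:

```python
# Toy sketch (not Izhikevich's actual plasticity rule): connections to D
# are strengthened more when the presynaptic spikes arrive within a
# narrow coincidence window. All constants are illustrative.
def weight_updates(arrival_times, base_dw=0.1, window_ms=1.0, boost=3.0):
    times = list(arrival_times.values())
    coincident = max(times) - min(times) <= window_ms
    factor = boost if coincident else 1.0
    return {n: base_dw * factor for n in arrival_times}

# Polychronized arrival (all spikes reach D at t = 50 ms) earns the
# boost; scattered arrivals get only the baseline update.
print(weight_updates({"A": 50, "B": 50, "C": 50}))
print(weight_updates({"A": 50, "B": 80, "C": 120}))
```

The point of the sketch is only the asymmetry: simultaneous arrival at D produces a larger weight change than independent arrival.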

Complexity

An intriguing consequence of this view is that the number of polychronous groups can exceed the number of neurons in the brain. At any given time, a neuron can be a member of more than one group, so the capacity of the brain may be larger than once conjectured. At any given time there can be on the order of 2^n groups, by subset construction.
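The subset-construction count can be made concrete with a small sketch. Since any subset of neurons could in principle form its own polychronous group, n neurons yield on the order of 2^n candidate groups:

```python
from itertools import combinations

# With n neurons, any subset can in principle form its own polychronous
# group, so the count scales like 2**n. For n = 4 there are already
# 2**4 - 1 = 15 non-empty subsets.
neurons = ["A", "B", "C", "D"]
groups = [c for r in range(1, len(neurons) + 1)
          for c in combinations(neurons, r)]
print(len(groups))  # 15
```

With even modest n this dwarfs the neuron count itself, which is the source of the claimed increase in memory capacity.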

From Wikipedia,

An artificial neural network (ANN), often just called a "neural network" (NN), is an interconnected group of artificial neurons that uses a mathematical model or computational model for information processing based on a connectionist approach to computation. In most cases an ANN is an adaptive system that changes its structure based on external or internal information that flows through the network.

(The term "neural network" can also mean biological-type systems.)

In more practical terms neural networks are non-linear statistical data modeling tools. They can be used to model complex relationships between inputs and outputs or to find patterns in data.

Background

There is no precise agreed definition among researchers as to what a neural network is, but most would agree that it involves a network of simple processing elements (neurons) which can exhibit complex global behavior, determined by the connections between the processing elements and element parameters. The original inspiration for the technique was from examination of the central nervous system and the neurons (and their axons, dendrites and synapses) which constitute one of its most significant information processing elements (see Neuroscience). In a neural network model, simple nodes (called variously "neurons", "neurodes", "PEs" ("processing elements") or "units") are connected together to form a network of nodes — hence the term "neural network." While a neural network does not have to be adaptive per se, its practical use comes with algorithms designed to alter the strength (weights) of the connections in the network to produce a desired signal flow.
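The idea of altering connection weights to produce a desired signal flow can be illustrated with a minimal sketch: a single artificial neuron trained with the classic perceptron rule to compute logical AND. The learning rate, epoch count and the AND task are illustrative choices, not from the text:

```python
# Minimal sketch of weight adaptation: one artificial neuron trained
# with the perceptron rule until it produces the desired output (AND).
def step(x):
    return 1 if x >= 0 else 0

w = [0.0, 0.0]   # connection weights
b = 0.0          # bias
lr = 0.1         # learning rate (illustrative)
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

for _ in range(20):  # a handful of passes suffices for AND
    for x, target in data:
        y = step(w[0] * x[0] + w[1] * x[1] + b)
        err = target - y
        w = [wi + lr * err * xi for wi, xi in zip(w, x)]
        b += lr * err

outputs = [step(w[0] * x[0] + w[1] * x[1] + b) for x, _ in data]
print(outputs)  # the trained neuron reproduces AND: [0, 0, 0, 1]
```

Nothing here is adaptive by necessity; as the text notes, it is the weight-update algorithm layered on top of the fixed network structure that produces the desired behaviour.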

These networks are also similar to the biological neural networks in the sense that functions are performed collectively and in parallel by the units, rather than there being a clear delineation of subtasks to which various units are assigned (see also connectionism). Currently, the term Artificial Neural Network (ANN) tends to refer mostly to neural network models employed in statistics, cognitive psychology and artificial intelligence. Neural network models designed with emulation of the central nervous system (CNS) in mind are a subject of theoretical neuroscience.

In modern software implementations of artificial neural networks the approach inspired by biology has more or less been abandoned for a more practical approach based on statistics and signal processing. In some of these systems, neural networks, or parts of neural networks (such as artificial neurons), are used as components in larger systems that combine both adaptive and non-adaptive elements. While the more general approach of such adaptive systems is more suitable for real-world problem solving, it has far less to do with the traditional artificial intelligence connectionist models. What they do have in common, however, is the principle of non-linear, distributed, parallel and local processing and adaptation.

Unless otherwise stated, the content of this page is licensed under the Eiffel Forum License 2.