Neurons

Neurons have been variously described.
In neuroscience and biology, cells that make up the nervous system.
In ML, fundamental units of neural networks that act as accumulators.
In psychophysics, feature detectors.

Biological neurons, properties, and parallels in ML

Neurons accumulate incoming signals and emit rate-coded spike trains to downstream neurons, each spike being all-or-nothing (the McCulloch-Pitts neuron used in ANNs), adapting to wide dynamic ranges (batch normalization), and communicating across synapses that are complex (?) and stochastic (DropConnect).
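The accumulate-then-threshold behaviour maps directly onto the McCulloch-Pitts unit. A minimal sketch (function name and parameters are my own, not from any library):

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Fire (1) iff the weighted sum of inputs reaches the threshold.

    All-or-nothing output: the unit accumulates its inputs and
    emits a binary spike, with no graded in-between.
    """
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total >= threshold else 0

# With weights [1, 1] and threshold 2, the unit computes logical AND.
print(mcculloch_pitts([1, 1], [1, 1], 2))  # fires
print(mcculloch_pitts([1, 0], [1, 1], 2))  # silent
```

With inhibitory (negative) weights and a suitable threshold the same unit computes other Boolean functions, which is essentially the original 1943 result.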

(Edgar Adrian discovered spikes, rate coding, and adaptation.)

Given that direct electrical communication was an option available to evolution (gap junctions), the fact that chemical synapses came to dominate is interesting. This might have to do with reliability/energy/biochemical constraints, but it might also be due to the extra complexity that chemical synapses and neurotransmitter dynamics make possible. ANN synapses today are as simple as can be: a single scalar weight. When, how, and whether to use more complex synapses is an interesting open question.

Neuron models

  1. Hodgkin-Huxley: biologically realistic conductance-based model. Computationally expensive.
  2. Izhikevich: simplified 2D quadratic integrate-and-fire model. Computationally cheap but still captures a wide range of spiking behaviour.
  3. Integrate-and-fire: the workhorse of spiking neural networks. Computationally cheap, simple voltage-based model.
  4. McCulloch-Pitts: the OG neuron model; extremely simple binary threshold units.
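A leaky integrate-and-fire neuron (model 3) can be simulated in a few lines with Euler integration. A sketch, with parameter values chosen for illustration rather than taken from any particular paper:

```python
def simulate_lif(input_current, t_max=100.0, dt=0.1,
                 tau=10.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0, r=1.0):
    """Leaky integrate-and-fire: tau * dV/dt = -(V - v_rest) + R * I.

    The membrane potential V integrates the input current, leaks back
    toward rest, and on crossing threshold emits an all-or-nothing
    spike and resets. Returns the list of spike times.
    """
    v = v_rest
    spikes = []
    t = 0.0
    while t < t_max:
        v += dt * (-(v - v_rest) + r * input_current) / tau
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset  # spike, then reset
        t += dt
    return spikes

# Subthreshold current (steady state 0.5 < threshold 1.0): no spikes.
print(len(simulate_lif(0.5)))
# Suprathreshold current: regular spiking, rate growing with current.
print(len(simulate_lif(1.5)), len(simulate_lif(3.0)))
```

Note the rate coding: a constant suprathreshold current produces a regular spike train whose frequency increases with the current, which is the behaviour Adrian observed in sensory nerves.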

Links

Sources