
Networks

Neurons interact with one another through synapses, as described previously. However, the point of such interactions is usually to produce circuits, or networks of neurons, that in combination can produce an output which is behaviourally relevant to the animal. It is a classic case of "the sum being greater than the parts" - individual neurons have a limited repertoire of output, but a network of neurons may have emergent properties that go beyond what the individual neurons within the network can produce.

This chapter considers the following topics involving network simulations using Neurosim.

Central Pattern Generators

Many biological activities involve rhythmic oscillations. For instance, locomotion usually involves two phases of activity known as the stance and swing phases, or the power and return strokes. For legged locomotion the stance/power phase is when the leg is on the ground and pushing back, while the swing/return phase is when the leg is lifted and returning forward to its starting position. Similar oscillations are found in many other locomotor activities, including flight and swimming, and also in non-locomotor rhythmic activities such as breathing.

Most locomotor rhythms that have been investigated have been found to continue even after the removal of all sensory input. This means that the basic circuitry for generating the rhythm must reside entirely within the central nervous system. These circuits are called central pattern generators (CPGs). Some CPGs have single-cell endogenous bursters at their core (although often there is a pool of such neurons coupled by electrical synapses). However, most CPGs rely on networks of neurons to generate the rhythm.

As far back as 1914, a Scottish physiologist (and mountaineer) called Thomas Graham Brown proposed what he called the “half centre” model. The stance and swing phases were supposed to each be generated by half of a locomotor centre, and the two halves inhibited each other (reciprocal inhibition), thus resulting in a rhythm when the locomotor centre as a whole was excited by a continuous, non-rhythmic excitatory signal descending from some command centre in the brain.

Flip-Flop Circuit

In its essential details, reciprocal inhibition has been found to be at the heart of most rhythmic systems that have so far been investigated. However, there is a crucial element missing from the model in its simplest form – the effective inhibition mediated by each side must somehow weaken with time, or the system locks up permanently in one phase or the other.

Note the circuit contains two neurons which are reciprocally connected by inhibitory synapses. But also note that the drug picrotoxin has been applied, so the synapses are blocked. Both neurons are spiking continuously due to tonic excitatory input from “the brain” (actually, just tonic current input). A small amount of random noise has been added to each neuron so that the spikes are not exactly synchronous.

Two separate brief external excitatory stimuli (the square boxes) are applied to N1 and then N2 in sequence, but at the moment all these do is temporarily increase the spike rate of the receiving neuron. Their purpose will become apparent shortly.

This shows what the neurons do without the reciprocal inhibition. What happens when we enable it?

There may be a few spikes which are exactly synchronous with each other (note that without the noise, the synchronous spikes would continue with each side inhibiting the other at exactly the same time, and therefore with equal effect, until the external stimulus unbalanced the activity; this is really a computer artefact, since there is always some noise in real neurons). But fairly soon, one side or the other “wins”, and inhibits the other. It is random which side wins – it depends on which spikes first. The inhibition continues until the inhibited neuron receives the brief external stimulus. Then it wins, and inhibits the other.

Sometimes N1 wins initially, sometimes N2. But the losing neuron is flipped into winning when it receives help from the external stimulus. If the excitatory stimulus arrives in a neuron that is already winning, the extra spikes just increase the inhibition onto the losing neuron. Note that you could also flip the state by briefly inhibiting the winning neuron, rather than exciting the losing neuron. You can try this out by adjusting the stimulus parameters if you wish.

The circuit thus acts as an electronic flip-flop (a genuine name for an electronic circuit component) which can latch into either state, but be flipped to the other by a brief extra input. Such circuits may be useful in the nervous system when there are two exclusive behaviour options, but the animal needs to be able to switch between them, perhaps in response to a sudden sensory input. An example might be a switch from forward to backward crawling, if the crawler met an obstacle. (This is just hypothetical – I’m not aware of any evidence for such a circuit.)

Reciprocal Inhibition Oscillator

The system now oscillates! The circuit is exactly the same, except that the inhibitory synapses have been altered so that the inhibition decrements (anti-facilitates) with time. (The synapse properties are set using the Synapses: Spiking chemical menu command to access the Spiking Chemical Synapses Types dialog. You can then select type b: hyperpolarizing inhibitory from the list, and note that the Relative facilitation parameter (near the bottom-right of the dialog) has a value of 0.9. Values less than 1 mean that a synapse of that type anti-facilitates, i.e. decrements.) The inhibition from the winner thus gets weaker and weaker until the loser can escape from the inhibition and become a winner. Then its inhibition weakens in turn, while the previously weakened inhibition recovers during the time when its pre-synaptic neuron is silent.

Remember: Facilitation and anti-facilitation are frequency-dependent pre-synaptic phenomena in which, when the synapse is activated repeatedly, the amount of transmitter released per spike is modulated; increased during facilitation, decreased during anti-facilitation. The synapse returns to its baseline release rate after a period of inactivity.
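
As an illustration, the decrement-and-recover rule can be sketched in a few lines of Python. This is a guess at a plausible update scheme, not Neurosim's actual algorithm: only the relative facilitation factor of 0.9 comes from the dialog described above; the recovery time constant and exponential recovery rule are assumptions.

```python
import math

# Sketch of an anti-facilitating synapse (hypothetical update rule).
REL_FAC = 0.9          # "Relative facilitation" < 1 => decrement per spike
TAU_RECOVERY = 500.0   # ms; assumed recovery time constant (illustrative)

strength, last_spike = 1.0, 0.0

def release(t_ms):
    """Return the release strength for a spike at t_ms, then decrement it."""
    global strength, last_spike
    # Recover exponentially toward the baseline (1.0) during silence.
    strength = 1.0 + (strength - 1.0) * math.exp(-(t_ms - last_spike) / TAU_RECOVERY)
    last_spike = t_ms
    released = strength
    strength *= REL_FAC    # each spike releases a little less than the last
    return released

# Within a burst the release decrements; a long silent gap restores it.
print([round(release(t), 3) for t in (0, 10, 20, 30, 3000)])
```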

Picrotoxin is a drug that blocks many inhibitory synapses, including those in this network. It is a non-competitive GABA-A receptor blocker, and thus works on the post-synaptic neuron. It does not affect the pre-synaptic neuron, which releases transmitter as usual.

With the inhibition blocked, the individual neurons revert to tonic activity reflecting their underlying central excitatory drive.

The neurons continue with the tonic activity – they do not return to oscillation.

Question: Why don’t the oscillations return when we take away the drug blocking inhibition? [Hint: remember which synaptic property allowed the oscillations in the first place and where the drug works.]

This should restore the oscillations. Does this fit with your answer to the previous question (or perhaps help you to answer it if you were stuck)?

Take-home message: Oscillations can be generated by reciprocal inhibition between neurons, even when none of the individual neurons within the network has any tendency to oscillate in isolation. However, there has to be some mechanism, such as synaptic anti-facilitation, to prevent one neuron permanently inhibiting the other.

Frequency Control

In vertebrates, locomotion frequency can be controlled by the level of excitation descending from the brainstem mesencephalic locomotor region (e.g. Ausborn et al., 2019). In our simple model this is represented by tonic depolarizing current applied to both half-centre neurons. We can modulate this by adding an additional timed stimulus to the neurons.

We just set up an additional depolarizing stimulus that is applied to both neurons during the mid period of the experiment. This increases the frequency of the spikes within each burst, and the burst frequency itself.

The dialog displays a plot of the instantaneous frequency of spikes in N1 against time. (The instantaneous frequency of a spike is simply the reciprocal of the interval between that spike and the next.) The points fall into two clear rows. The upper row shows a "cloud" of points generated by the short time intervals between spikes within the bursts. The lower row of single points is generated by the relatively long time interval between the last spike in a burst and the first spike in the next burst. Both rows show an elevation in frequency in the middle section, which is when the stimulus was applied.
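
For concreteness, here is that definition expressed as code (hypothetical spike times; the last long interval mimics the gap between bursts):

```python
# Instantaneous frequency: the reciprocal of each inter-spike interval.
def instantaneous_frequency(spike_times_s):
    return [1.0 / (t2 - t1) for t1, t2 in zip(spike_times_s, spike_times_s[1:])]

# Two 50 ms intervals within a burst, then a 250 ms gap to the next burst.
print(instantaneous_frequency([0.10, 0.15, 0.20, 0.45]))  # [20.0, 20.0, 4.0] Hz
```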

The increase in frequency within each burst is easily explained by the increased stimulus speeding the rate at which the neuron depolarizes to threshold. The increased frequency of the bursts themselves is slightly more complicated. It is partly caused by the same effect (more rapid depolarization to threshold), but also by an increased rate of synaptic decrement caused by the higher within-burst frequency. Thus the inhibition which suppresses antagonist spiking during the burst decrements more rapidly, so that the antagonist is held below threshold for a shorter period, which in turn allows the switch between half centres to occur more rapidly. The weaker inhibition is apparent in the recording, where the between-burst hyperpolarization is less pronounced in the central section.
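
The first of these effects can be made quantitative with the standard leaky integrate-and-fire relation t = tau * ln(IR / (IR - Vth)) for the time taken to depolarize from rest to threshold under a constant current I. A quick sketch with invented values (not parameters from the simulation) shows how increasing the drive shortens each interval:

```python
import math

# Leaky integrate-and-fire: time to reach threshold under constant drive.
tau, R, V_th = 20.0, 10.0, 15.0          # ms, MOhm, mV above rest (illustrative)
for I in (2.0, 3.0, 4.0):                # nA of tonic drive
    t = tau * math.log(I * R / (I * R - V_th))
    print(f"I = {I} nA -> interval {t:.1f} ms -> {1000 / t:.0f} Hz")
```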

Multi-Phase Rhythms

Generating a rhythm is only part of what is needed for locomotion – most movement involves the coordinated activity of several limbs and several muscles within each limb. We only know a little about how such coordinated activity arises, but much research has been done on potential mechanisms.

In legged locomotion there are actually four phases – leg up, leg forward, leg down and leg back. A simple elaboration on the half-centre model can generate this pattern.

This shows a four phase progression which could drive a leg. Note that the neurons in the Setup view are colour-coded according to their membrane potential, so the pattern of activity can be seen in the changing colours during the rhythm.

If you look at the network in the Setup View, you can see that there are actually 2 mutually inhibitory half-centres present: the diagonal pair N1 and N3 (labelled up-down) and the diagonal pair N2 and N4 (labelled forward-back). However, these are coupled in a ring of one-way inhibition, such that each element inhibits (and thus terminates the activity of) the neuron that was active in the preceding phase. This one-way inhibition is organized in a clockwise direction, and activity in the network thus progresses in an anti-clockwise direction (N4 → N3 → N2 → N1 → N4 etc.). (You can see this by watching the colours in the Setup view.)

A four-phase oscillator. a. The neural circuit. b. The circuit output.

In this 4-neuron network none of the synapses have to decrement in order to get oscillation. Looking at the activity in detail can explain why.

Pick a time when N1 has just started spiking.

N2 was spiking, but it receives inhibition from N1, so it stops spiking and its membrane potential starts to hyperpolarize. N3 was already inhibited, and it too receives inhibition from N1 so it remains hyperpolarized. However, N4 is not being inhibited by N1 because there is no synaptic connection, and N2 and N3, which could inhibit it, are both themselves inhibited and so silent. So the N4 membrane potential starts to recover and depolarize (it was inhibited in the previous phase). Eventually, its membrane potential reaches threshold and it starts to spike. It then immediately inhibits N1, and the rhythm moves on to the next phase of the cycle.

And so it goes on …
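
If you want to experiment with this logic outside Neurosim, the phase-by-phase account above can be condensed into a toy threshold model. This is a deliberately crude sketch with invented parameters, not Neurosim's conductance-based neurons; each printed line marks which neuron is above threshold, and the asterisk steps through the N1 → N4 → N3 → N2 sequence:

```python
import numpy as np

# Toy threshold model of the four-phase ring (illustrative parameters only).
# Neurons relax toward (rest + drive - inhibition); they are "active" above
# threshold, and active neurons inhibit their targets via the matrix W.
tau, v_rest, thresh = 50.0, -60.0, -50.0      # ms, mV, mV
drive, g_inh, dt, T = 15.0, 30.0, 1.0, 1500   # mV of drive/inhibition, ms, steps

# W[j, i] = 1 means neuron j inhibits neuron i (indices 0..3 = N1..N4).
W = np.zeros((4, 4))
for j, i in ((0, 1), (1, 2), (2, 3), (3, 0)):  # one-way ring: each cell
    W[j, i] = 1                                # inhibits its predecessor phase
W[0, 2] = W[2, 0] = 1                          # half-centre N1 <-> N3
W[1, 3] = W[3, 1] = 1                          # half-centre N2 <-> N4

v = np.array([-52.0, -58.0, -60.0, -56.0])     # stagger the starting potentials
for t in range(T):
    active = (v > thresh).astype(float)
    target = v_rest + drive - g_inh * (active @ W)
    v += dt * (target - v) / tau
    if t % 50 == 0:
        print(f"{t:5d} ms  " + "".join("*" if a else "." for a in active))
```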

Question: Why does removing the picrotoxin restore the oscillations in this network circuit, whereas in the previous circuit (the Reciprocal Inhibition Oscillator) it did not?

It is quite difficult to understand what is going on in this circuit, but if you think through the network carefully, you can figure out how it works. On the other hand, that is precisely the point of modelling – it would be a bold neuroscientist who would bet money (or their reputation) on exactly what the output of this circuit would be under different conditions, but by modelling we can find out!

Separate Rhythm Generation and Pattern Formation

In the hypothetical Multi-Phase Oscillator circuit the coordinated rhythmic output is an emergent property of the entire network. However, in some systems there is thought to be a separation of functionality into a subset of "clock" neurons, whose properties and connections generate a fundamental rhythm, which then feeds into a further subset of neurons and/or connections that "tune" the rhythm into a pattern of output that actually produces the required coordinated motor activity (e.g. McCrea and Rybak, 2008).

Laser Photoinactivation and Optogenetics

In a real world investigation of a circuit such as that described above, it would be a great help in understanding what is going on if you could simply remove one or more neurons from the circuit, and see how the output changed. Often, this is simply not possible, but in some preparations and some circuits (and all simulations, of course), it is. There are two main techniques available: laser photoinactivation (Miller & Selverston, 1979) and optogenetics (Nagel et al., 2003).

Laser Photoinactivation

In a real experiment, the first step in laser photoinactivation is to fill the target neuron(s) with a photoabsorptive dye such as Lucifer Yellow or 5(6)-carboxyfluorescein. This can be done either by iontophoretic injection through a microelectrode, which fills just a single neuron and has the advantage of allowing the activity of that neuron to be recorded before it is inactivated, or by wick-filling a cut nerve root, which will fill the whole population of neurons which have axons within that root.

Once the neuron is dye-filled, a laser with the appropriate excitation wavelength is focused on it, and the consequent fluorescence kills the neuron. If only a part of the neuron is illuminated (the axon, or a specific dendritic branch), then only that part is killed, leaving the rest of the neuron functioning normally.

In Neurosim, photoinactivation can be simulated using the Zap facility. There are various ways of using this.

If you want to zap a single neuron after a simulation has started (so that you can see the pre-zap activity pattern):

Once you click the neuron, the Results view unfreezes and the simulation continues. But now N4 is effectively dead. Its membrane potential is 0, and it cannot receive or deliver any synaptic input (including through any electrical synapse, if it made such a connection).

The circuit output changes dramatically after N4 is zapped.

Question: Does the change in circuit output fit with your understanding of how the circuit operates (assuming that you did the previous tutorial)?

If you want to zap multiple neurons simultaneously during a simulation run (equivalent to the wick-filled axon mode):

If you want to run a simulation with certain neurons zapped from the outset, right-click the neurons and select Zap from the context menu before you click Start.

Optogenetics

A major technical advance in circuit analysis has been the discovery of genetic manipulations that enable light-activated ion channels originating from bacteria to be expressed in non-bacterial cells such as neurons. Furthermore, the manipulations can be such that the channels are only expressed in a particular lineage of cells within the animal, such as GABAergic interneurons, or motorneurons.

The channels go by the general name of channelrhodopsins (ChRs), and, thanks to genetic engineering, there is now an extensive toolkit of such channels that differ both in the wavelength of light that activates them, and in their ion specificity. In particular, there are chloride-specific channels that, when light activated, will inhibit a neuron, and sodium-specific channels that will excite a neuron.

Two such channels (one of each type) have been implanted into N1 in the Multi-Phase Oscillator circuit.

Question: Do you think the blue light is activating the chloride-specific ChR, or the sodium-specific ChR? (In a real experiment, you would, of course, know which ChR type you implanted, so this question is just to see if you are paying attention :))

Take-home message: Photo-inactivation and optogenetics can be very helpful in figuring out the connections involved in a real neural circuit, where we do not have a convenient circuit diagram available in a Setup view! The optogenetics tool is particularly useful when combined with calcium imaging of neuronal activity, because it does not require any microelectrode or surgical intervention - but of course it does require that a suitable genetic toolkit is available for the preparation on which you are working.

Sensory Feedback

There is ample experimental evidence that central pattern generators really do exist – in many animals the nervous system can generate rhythms in the absence of sensory feedback. However, there is also ample evidence that sensory feedback normally plays an important role. Animals obviously have to adjust a locomotor rhythm in the event of unexpected external stimuli – if you trip up, you will take an extra rapid step to avoid falling on your nose. But also, most rhythms run faster in the intact animal with sensory feedback present than they do when the CNS is isolated from such feedback.

Sensory feedback may accelerate rhythms simply by providing additional general excitation to the nervous system. However, it can also act directly on the pattern generator itself.

In this circuit N1 and N2 at the top are a reciprocal-inhibitory half-centre oscillator like we saw earlier. They drive flexion and extension as labelled in the Setup view. Each half-centre activates a peripheral muscle (N3 and N6), and each muscle activates a peripheral sense organ (N4 and N5). These mediate negative feedback onto their half of the pattern generator. The delay of this feedback has been set to 200 ms, to account for the time taken for muscle contraction and axonal conduction.

The system generates 6 cycles of oscillation in the duration of the simulation.

Curare (the famous South-American poison arrow drug) blocks nicotinic acetylcholine receptors and hence paralyzes muscles and so disables the feedback. So in the presence of curare, the CPG oscillates at its intrinsic, free-running, frequency with no sensory feedback.

The oscillator now only generates slightly more than 4 cycles in the duration of the simulation. It is clearly running slower.

In this case, the cause of the frequency change is that the negative sensory feedback in the intact system is timed so as to truncate each burst of its half-centre driver, thus releasing the other half-centre early from its inhibition. This truncation allows the whole system to oscillate faster.

This is very much a “thought experiment” simulation. It illustrates one way in which sensory feedback could influence CPG frequency, but it is undoubtedly a massive simplification of the way real systems work.

Phase Resetting Tests

The frequency (or period) and phase are key characteristics of an oscillator, and if you do something experimentally that alters either of these, then you know that you have somehow affected the rhythm generation mechanism itself.

Two neurons are visible and both oscillate. However, the program has been set up to hide any connections that might exist. This makes things a bit more realistic, since in a real experiment you cannot normally see connections between neurons (you are lucky if you can even see the neurons!).

So are both neurons endogenous bursters, or is it a network oscillator, or some sort of mixture?

The first part of the sweep is unchanged, but the stimulus induces a large burst of spikes in N1 and N2 (even though the stimulus was only applied to N1), and after that, things change. At times when the neuron was previously spiking it is silent, and when it was silent, it is spiking. In the new sweep the spikes “fill in the gaps” in the previous sweep.

The stimulus forces N1 to produce a burst of spikes earlier than it would have normally, and then after this early burst, the bursts continue at the same intervals that they had previously. The stimulus pulse has thus reset the phase of the rhythm, indicating that N1 is part of the rhythm-generating circuit. The question now is, is N2 also part of the rhythm-generating circuit?

The stimulus now induces an early burst of spikes in N2 but not in N1. Furthermore, this does not reset the phase of the rhythm. To confirm this:

With the two sweeps superimposed, you can see the early burst in N2 produced by the stimulus, but you can see that this has no effect on the rhythm – the bursts continue on afterwards just as though there was no stimulus at all.

This tells us that N2 is a “follower” neuron that does not actually participate in generating the rhythm. It is simply driven by a rhythm that is generated elsewhere – which in this case must be N1, since there is nothing else.

To see the actual circuit:

You can now see that N1 makes a non-spiking excitatory synapse onto N2 (the cyan rounded rectangle a). N1 is indeed an endogenous burster, and N2 is simply following the activity of N1 through its synaptic input. Anything you do to N2 has no effect on N1 because there is no synaptic connexion from N2 to N1.

Tadpole Swimming: A case study

Tadpoles may sound like rather obscure animals for a neuroscientist to study, but in fact the neural mechanism controlling swimming in hatchling tadpoles (charmingly known as polywiggles in Norfolk – from old English poll = head, as in poll tax, and wiggle = wiggle!) is one of the best understood vertebrate CPGs (Roberts et al., 2010). At a behavioural level, swimming can be initiated by a brief sensory stimulus (touch) to one side of the body, and is driven by wiggling the tail in left-right alternating cycles at 10-25 Hz, with a wave that propagates from head to tail. The core mechanism depends on interactions between just two types of interneurons: descending interneurons (dINs) and commissural interneurons (cINs). These occur as separate populations on the left and right side of the animal, but the dINs on each side are coupled to each other, so that they excite each other and essentially act as a single unit. The dINs excite trunk (myotomal) motorneurons on their own side of the spinal cord, so if the left dINs spike, the tail bends to the left, and if the right dINs spike, the tail bends to the right.

DINs are glutamatergic neurons that arise in the hindbrain and rostral spinal cord. Because of their mutual excitation they develop a long-lasting NMDA-receptor mediated depolarization during swimming, and it is this depolarization that maintains swimming for episodes that can last from a few seconds to more than a minute. The dINs also drive the ipsilateral cINs (ipsilateral means on the same side, as opposed to contralateral, which means on the opposite side) through brief AMPA-receptor mediated EPSPs. The cINs are glycinergic and feed inhibition back to the dINs, but on the other (contralateral) side of the spinal cord. It is thus the cINs that provide the reciprocal inhibition necessary to produce the alternating swimming rhythm. Unilateral lock-up is prevented by the cellular properties of the dINs, which each only spike once for a given depolarization. In order to spike again, there has to be an intervening hyperpolarization, which is provided by the cIN feedback. Thus during swimming the dIN spikes are triggered by rebound excitation from the IPSPs generated by the cINs.

The following simulations are based on parameters modified from Sautois et al. (2007). We will build up the circuit gradually, so that you can see the contribution of the components at each stage.

Basic dIN and cIN properties

There are two neurons, a single dIN and a single cIN, but there is no connection between them. Each receives stimuli (1 - 3), but these initially have 0 amplitude, so the traces are flat.

The dIN starts to spike when the amplitude reaches 0.06 nA, but it generates just a single spike, even when the stimulus is increased well beyond threshold. As you continue to increase the stimulus, the dIN produces some post-spike oscillations, but it never generates more than the single initial spike. This is a rather unusual property for a neuron, but it is characteristic of dINs in the tadpole.

The cIN has a slightly lower threshold than the dIN, and once threshold is exceeded it generates multiple spikes, which increase in frequency as the stimulus strength increases. This is a more normal neural response to stimulation. The cause of the difference lies in the kinetics of the voltage-dependent ion channels in the two neurons. (If you wish, you can double-click a neuron to open its Properties dialog and then examine the voltage-dependent channels. But unless you're really interested, you don't need to get into that level of detail if you just accept the properties at a functional level.)

The dIN, which is depolarized but not spiking at the time of the negative pulse, responds by generating a small spike at the termination of the pulse. This is a case of rebound excitation as demonstrated earlier in the classic HH model, and has the same underlying cause: relief of inactivation of voltage-dependent sodium channels, and closure of open voltage-dependent potassium channels.

Note that increasing the hyperpolarization increases the size of the rebound dIN spike, and blocks spikes in the cIN.

Next we start to connect the neurons:

As before, the upper neuron in the Setup view is a dIN, and it now makes an NMDA-type excitatory connection to itself (the blue diamond labelled c). This synapse represents the re-excitation of the whole population of dINs caused by the reciprocal synapses that they form with each other. The lower neuron is a cIN, and if it spikes, it will inhibit the dIN through its glycinergic synapse (the blue diamond labelled b).

Initially, the cIN is silent, but the dIN spikes in response to stimulus 1. The recurrent dIN excitation is visible as a long-lasting depolarization following the spike.

The core swimming circuit

Now that we have seen the key properties of the individual dINs and cINs and how they interact with each other, we can build a minimalist CPG circuit.

First look at the circuit in the Setup view. There are 4 neurons in total, comprised of a left-right pair of dINs (N1, N3 in the top row) and a left-right pair of cINs (N2, N4 in the bottom row). Each dIN excites itself through a long-lasting NMDA-type synapse (c), and its ipsilateral cIN through a brief (phasic) AMPA-type synapse (a). Each cIN inhibits its contralateral dIN through a phasic glycinergic synapse (b). Of course, each neuron in the model represents many neurons in the real animal, and the whole circuit is replicated many times along the length of the spine. So this is very much a simplified "concept model" of the real situation.

Now look at the Results view. Swimming is initiated by separate stimuli (bottom trace) applied to the left and right dINs, but once initiated it is self-sustaining. We will come back to the initiation later, but for now concentrate on swimming itself.

Swimming is characterised by alternating spikes in the left and right dINs (top and third traces, brown and magenta – each magenta spike occurs between two brown spikes; one easy way to check this is to drag the magenta trace up so that it overlays the brown trace and observe the relative timing, then restore the position by resetting the top and bottom axes scales for the 3rd axis back to +40 and -70). Since each dIN excites motorneurons on its side of the spine, this will generate side-to-side movement of the tail, resulting in swimming in the real animal. Note that the dIN spikes occur on top of a sustained depolarization caused by the recurrent excitation, which is interrupted periodically by IPSPs generated by the cINs (second and fourth traces, green and blue). In contrast, the cIN spikes result from brief EPSPs arising from the resting potential - there is no sustained depolarization.

Now examine the timing of the activity in the 4 neurons, ignoring the start of the simulation (because initiation is a tricky issue that we will return to).

Tadpole swimming. a. The core circuit. b. A cycle of the swimming rhythm.

The left dIN (top trace, brown) activates the left cIN (second trace, green) with short latency, and this in turn generates a short-latency IPSP in the right dIN (third trace, magenta). It takes time for the right dIN to recover from this IPSP, but when it does, it generates a rebound spike. This then generates a spike in the right cIN (fourth trace, blue), which feeds back to inhibit the original left dIN (back to the top trace). And so the sequence continues. The key element determining the cycle period in this circuit is the recovery time from the IPSP. This depends in part on the properties of the IPSP itself, but also on the voltage-dependent characteristics of the long-term EPSP in the dIN.

Spike-Triggered Display

A useful technique that can emphasise the timing and consistency of synaptic interactions is to use a spike-triggered display (see e.g. Fig 3c in Roberts et al., 2010).

In the Results view:

Multiple sweeps are superimposed, each showing one cycle of swimming. The important point is that the sweeps are all aligned so that the membrane potential of the left dIN (specified as the trigger neuron, 1) crosses 0 (pre-defined as the synaptic trigger level in the neuron properties dialog) at exactly 5 ms (the specified pre-trigger delay) after the start of the sweep display.

Because the left dIN spikes (top trace, brown) are all aligned and the dINs drive the ipsilateral cINs with a fixed synaptic delay, the left cIN spikes (second trace, green) are also aligned. The left cIN spikes cause IPSPs in the right dINs (third trace, magenta), which are aligned, and these generate rebound spikes in the right dINs, which activate the right cINs (fourth trace, blue). And this leads to the next cycle.

The variability in the display is because the swim pattern takes a few cycles to "settle down" after initiation. You can see this by looking selectively at individual sweeps:

You should see that the first few sweeps vary, but after about sweep 6 the pattern stabilizes, and subsequent sweeps are identical.

To get an overview of the "typical" pattern you can look at the average of the sweeps:

The Results view now shows each trace as the point-by-point average of all 25 sweeps. The raw sweeps are shown in grey, while the average is shown as a highlighted trace. The average spike peaks are smaller because of the variation in spike timing in the early sweeps, but the overall pattern of activity is very clear.
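
The trigger-align-average procedure itself is simple enough to sketch directly (a hypothetical helper, not Neurosim code; `v` is assumed to be a sampled membrane-potential array, and the 0 mV trigger level and 5 ms pre-trigger delay mirror the settings described above):

```python
import numpy as np

# Align sweeps on upward crossings of the trigger level, then average.
def spike_triggered_sweeps(v, dt_ms, trigger_level=0.0, pre_ms=5.0, sweep_ms=50.0):
    pre, length = int(pre_ms / dt_ms), int(sweep_ms / dt_ms)
    # Indices where the trace crosses the trigger level going upward.
    crossings = np.where((v[:-1] < trigger_level) & (v[1:] >= trigger_level))[0]
    sweeps = [v[i - pre : i - pre + length]
              for i in crossings if i >= pre and i - pre + length <= len(v)]
    return np.array(sweeps)

# mean_trace = spike_triggered_sweeps(v_left_dIN, dt_ms=0.1).mean(axis=0)
# gives the point-by-point average of all aligned sweeps.
```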

Swim initiation

In the simulation above, swimming is initiated by separate stimuli to the left and right dINs, but these stimuli have a 40 ms time delay between them (stimuli 1 and 2 occur at 25 and 65 ms respectively). This is a bit unrealistic. It is hard to imagine how a single sensory stimulus could drive the two sides with such a long delay between them - tadpoles are very small, and axonal conduction across the body could not take more than a few milliseconds.

What happens if the two sides are driven simultaneously?

The circuit still produces activity, but the two sides spike synchronously (left and right dINs spike together rather than in alternation), and the frequency is more than doubled. This would not produce an effective swimming behaviour. The cause of the synchronicity is that when the two sides are activated at exactly the same time, the crossed inhibition occurs immediately after each dIN spike, rather than with a half-cycle delay. However, this is a metastable condition (like a knife balanced on its edge – it will tend to fall one way or the other with any random perturbation): if noise is added to the system, then sooner or later it collapses into the more stable pattern of left-right alternation.

The circuit is exactly the same as before, but random noise has been added to each neuron. The pattern (probably) starts in the high-frequency synchronous mode, but sooner or later it (probably) collapses into the alternating mode. You may need to run the simulation several times, since with random noise it is not possible to predict in advance when the switch will occur.

Surprisingly, synchronous activity does occasionally occur in the real animal (Li et al., 2014), but it has no known function and is probably a "mistake". It normally only lasts briefly, before relapsing into normal swimming. It is likely caused by the synchronous activity occupying a similar metastable state as seen in the simulation.

In the real animal it would not be satisfactory to rely on noise to perturb a metastable equilibrium, since the timing is unpredictable. To solve the swim initiation conundrum, we have to bring in another class of interneurons, the ascending interneurons (aINs). AINs are rhythmically active during swimming, but they inhibit ipsilateral neurons in the CPG - in particular, they inhibit the cINs.

The Results view shows good swimming activity, but the Setup view shows that there is only one stimulus applied to the circuit to initiate swimming, so there can be no bilateral difference in stimulus timing.

The Setup view also shows that there are 3 new neurons in the circuit. N5 and N6 are aINs. They receive AMPA-receptor mediated EPSPs from their ipsilateral dIN (which is what makes them rhythmically active), and they make glycinergic inhibitory output to their ipsilateral cIN. The neuron at the top of the Setup view (N7) is a left-hand sensory neuron. It makes bilateral excitatory input to both the dINs and the aINs. This is definitely an oversimplification of the real circuit (for instance, there are other interneurons interposed between the sensory neurons and the CPG neurons), but it conveys the essential features.

The key point with respect to swim initiation is that there is a delay in the sensory activation of the contralateral aIN. All synaptic delays in the simulation have been set to 1 ms, except the N7-to-N6 connection, which has a delay of 3 ms. The following sequence takes you through the consequences of this delay:

  1. On the left-hand side (ipsilateral to the stimulus), the aIN (trace, orange) spikes immediately following the stimulus. This left aIN spike is early enough to inhibit the cIN on that side (2nd trace, green), and prevent it from spiking.
  2. This means that the right dIN (4th trace, magenta) does not receive an IPSP immediately after its spike (as it did during synchronous activity above), and does not generate an immediate rebound spike.
  3. On the contralateral side, the extra 2 ms in the activation time of the right aIN (6th trace, dark green) means that the right cIN (5th trace, blue) can "escape" and is not inhibited. Note that the left and right cINs are activated at exactly the same time, but the left cIN receives the aIN IPSP before it spikes (which prevents it from spiking), while the right cIN does not receive the aIN IPSP until after it spikes.
  4. This means that the left dIN (1st trace, brown) does receive an IPSP immediately after its first spike, and its second spike therefore occurs early. However, after this one early double-frequency spike, the rest of the swim episode shows the normal alternating pattern.

This asymmetry in dIN activation essentially replicates the 40 ms relative delay in dIN activation used in the core circuit simulation to initiate swimming, but it does it using only a 2 ms difference in activation time between aINs on the left and right sides of the animal, which is within a plausible biological range.

Take-home message: The swimming rhythm in hatchling tadpoles is primarily driven by spikes in descending interneurons (dINs). These spikes activate commissural interneurons (cINs), which in turn inhibit contralateral dINs. The dIN spikes themselves result from post-inhibitory rebound from this contralateral inhibition, superimposed on an NMDA-receptor mediated background depolarization. The background depolarization is caused by mutual re-excitation between dINs.

Short-Term Motor Memory and the Sodium Pump

Tadpoles swim in episodes that can last from a few seconds to several minutes (although in the simulations above, they last indefinitely). In a real tadpole, an episode can be terminated abruptly by a mechanical stimulus to the cement gland on the head, such as a “nose bump” collision with an obstacle (or even the under-meniscus of the water surface); the sensory input induced by such a stimulus activates GABAergic neurons in the hindbrain, which in turn inhibit the spinal circuitry involved in generating the swimming rhythm. However, even without such acute sensory inhibition, natural swimming frequency within an episode gradually slows, and eventually the episode self-terminates. If a second swim episode is induced by appropriate excitatory stimulation within a minute or so after the natural termination of the previous episode, the duration of the second episode can be substantially reduced. The shorter the gap between the episodes, the greater is the shortening effect. It thus appears that the spinal circuitry "remembers" its previous activity for a brief period, and this memory affects its subsequent output. This phenomenon has been termed short-term motor memory (STMM: Zhang & Sillar, 2012).

The mechanism underlying STMM in tadpoles is, at least in principle, surprisingly simple. At the termination of an episode of swimming, the membrane potential of most spinal neurons, including cINs, shows a small (5-10 mV) but significant period of extended hyperpolarization. This can last for up to a minute (similar to STMM), with the potential gradually returning back to its pre-swim resting level during this period. Thus, if the second swim episode is elicited during this ultra-slow after-hyperpolarization (usAHP), the duration of the second episode is reduced. If the gap between episodes is short, the usAHP amplitude induced by the first is still relatively large and the STMM effect is pronounced. If the gap is longer, the usAHP amplitude has decreased and the STMM effect is reduced. If the gap is sufficiently long that there has been full recovery from the usAHP, then there is no STMM, and the second episode duration is normal.

So the next question is, what causes the usAHP? This too has a quite simple explanation. During the swim episode the cINs spike repeatedly at up to 25 Hz, and during this period, there is a substantial inflow of sodium ions into the neuron, carried in part through the voltage-dependent sodium channels of the spike itself, and in part through the AMPA-receptor mediated EPSPs that drive the cIN spikes. The sudden influx of sodium overwhelms the constitutive sodium clearance mechanism (the standard Na/K ATPase: the “sodium pump”), so the intracellular sodium concentration starts to rise. This activates a specific alpha-3 sub-type of sodium pump which has a lower affinity for sodium and is normally silent, but is activated by high concentrations of intracellular sodium. This “dynamic” sodium pump, like the standard pump, is negatively electrogenic (it pumps 3 Na+ ions out for every 2 K+ ions it pumps in, leading to a net hyperpolarizing current), and it is this which causes the usAHP. As the excess sodium is cleared from the neuron, the dynamic pump activity declines, and thus so does the usAHP.
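
The chain of causation just described (spikes → sodium load → pump current → usAHP) is easy to caricature in code. All numbers below are invented for illustration; as noted later, even the simulation's own values for these parameters are heuristic:

```python
import numpy as np

# Minimal sketch of the dynamic-pump idea: spikes add intracellular Na+,
# the pump clears the excess, and the electrogenic pump current (which
# scales with the excess) produces a slow after-hyperpolarization.
dt, T = 1.0, 120_000                      # ms; two simulated minutes
na, na_rest = 10.0, 10.0                  # mM (illustrative)
na_per_spike, tau_pump = 0.01, 20_000.0   # influx per spike; clearance (ms)
g_pump = 0.5                              # mV hyperpolarization per mM excess Na+

v_shift = np.zeros(T)
for t in range(T):
    if t < 30_000 and t % 50 == 0:        # 20 Hz spiking for the first 30 s
        na += na_per_spike
    na += dt * (na_rest - na) / tau_pump          # pump clears excess sodium
    v_shift[t] = -g_pump * (na - na_rest)         # electrogenic hyperpolarization

print(round(v_shift[30_000], 2), round(v_shift[90_000], 2))  # peak vs recovered usAHP
```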

A pair of stimuli is applied to the dINs to elicit an episode of swimming (note the very slow timescale: individual cycles within the episode merge on the screen to appear as a solid block). Unlike in previous simulations, the episode self-terminates due to the development of the usAHP. [Note that there are other mechanisms that contribute to self-termination of swim episodes (e.g. Dale, 2002), but they are not implemented in this simulation.]

The ultra-slow after-hyperpolarization (usAHP) develops during an episode of swimming, and declines after the episode terminates (arrow, top trace). A horizontal cursor has been placed at the resting potential of the left cIN (green, top trace). The membrane potential of the right cIN (blue, second trace), the stimulus (red, third trace), the dynamic pump current in the left cIN (black, fourth trace) and the intracellular sodium concentration in the left cIN (purple, bottom trace) are shown.

The top trace (green) shows activity in the left cIN during the episode. The vertical scale has been expanded to "zoom in" on the base membrane potential, to make the time course of the usAHP clearly visible. Before the episode the resting membrane potential is -60 mV, immediately after the episode it has hyperpolarized to about -65 mV (peak usAHP), and then it slowly recovers back to -60 mV over about a minute. The second trace (blue) shows the right cIN at a normal scale, so that the spikes are fully visible. The usAHP is identical in this neuron, but less obvious at this scale.

The episode terminates because the increasing hyperpolarization caused by the usAHP eventually causes the dIN-mediated EPSP in the (left) cIN to drop below threshold. This breaks the feedback loop, and terminates the episode.

The bottom trace (purple) shows the intracellular sodium concentration in the left cIN. It rises during the swimming episode, and then falls again after the episode, as the dynamic sodium pump restores the concentration to its resting level. The 4th trace (black) shows the hyperpolarizing current generated by the dynamic pump (the pump current sign convention follows that of a normal ionic channel, so an upward deflection indicates an outward, hyperpolarizing, positive current); this mirrors the change in sodium concentration, and is directly responsible for the usAHP.

Implementation details: The circuit is identical to that shown previously, but the voltage-dependent sodium channels and the AMPA receptors in the cINs have been set to have sodium as their carrier ion (AMPA receptors are mixed cation channels, so the sodium component is set at 50% of the total current). This is achieved through the relevant Properties dialogs. In addition, the cINs have a sodium concentration-dependent electrogenic sodium pump, which generates the usAHP. The dINs are unchanged from the previous simulations. (They do in fact have a putative usAHP, but this is masked by a hyperpolarization-activated current Ih (Picton et al., 2018) and so is ignored in this simulation.) Note that the quantitative parameters determining sodium concentration and pump rate are heuristic - there is no physiological evidence for the values in this system.

The Gauss stimulus option has been set up to deliver two episode-initiating stimuli to the dINs, with the second occurring about 7 s after the termination of the first episode. The second pair of stimuli successfully initiate another swim episode, but the duration of this second episode is much shorter than that of the first. This is short-term motor memory!

The decline in cycle frequency within each episode, and the shorter duration of the second episode, are clearly visible.

The cause of the reduced duration of the second episode is fairly obvious from the sodium pump current trace (black, fourth trace). At the time of the start of the second episode the pump current has only slightly declined from its peak level, and so as more sodium floods into the cell (purple, bottom trace) the pump current soon returns to the level that was sufficient to render the cIN EPSP sub-threshold, and thus to terminate the second episode.

The gap between the end of the first episode and the start of the second is now longer and the dynamic pump has had more time to clear sodium from the cell, and so its current level is reduced. It therefore takes longer to return to the peak level that blocks the cIN EPSP, and the duration of the second episode is extended, although still considerably shorter than that of the first.

Take-home message: A dynamic sodium pump in cIN neurons generates an ultra-slow after-hyperpolarization (usAHP) in response to the increase in intracellular sodium concentration that occurs during a swim episode. If a second episode is initiated before the usAHP has decremented to its resting level, the duration of the second episode is reduced in a gap duration-dependent manner. This mediates a short-term motor memory (STMM).

Finally, it should be noted that there is increasing evidence that sodium pump-induced usAHPs are widespread in the nervous system of many animals. In some cases they may mediate STMM as in the tadpole, in others they may have different functions, such as protecting from excitotoxicity-inducing hyperactivity in hippocampal neurons (see Picton et al., 2017, for references).

Synchronization and Entrainment

People who study oscillators (particularly physicists) distinguish between synchronization and entrainment. Synchronization occurs when independently-rhythmic entities interact with each other bi-directionally to produce a coordinated system response. Entrainment is when the interaction is one-way. Thus our circadian clock is entrained by the external day-night light cycle (our internal biological clock does not affect the rotation of the earth!), but the multiple neural oscillators in the suprachiasmatic nucleus that maintain our clock synchronize each other by mutual interaction.

Entrainment

The single neuron shows an endogenous rhythm. It does not have full H-H spikes, but that doesn't matter for our purpose; what matters is that it oscillates at a fixed frequency which is determined by its intrinsic properties.

A repetitive stimulus now occurs with a frequency which is slightly faster than the endogenous rhythm. The stimulus continuously “pulls” the oscillation forward, so that on each cycle it occurs slightly earlier than its endogenous frequency. This is an example of entrainment, and the stimulus could be called a forcing stimulus.

With the default settings, the dialog displays the instantaneous frequency of spikes in the Results view. A "spike" is regarded as occurring during a time period in which the membrane potential goes above the red cursor (it doesn't have to be an actual spike: any oscillation that goes above threshold and then drops below threshold is counted). Instantaneous frequency is the reciprocal of the time interval between the onset of spikes, so there is one fewer frequency value than there are spikes in the display. Various other analysis options are available within the dialog, but they are not used in this activity.

The row of dots across the dialog screen shows the instantaneous frequency of the oscillation.

This is the frequency of the oscillations in sweep 1 (the sweep ID is shown at the top-left of the dialog), which is the endogenous rhythm without the forcing stimulus.

How far can a forcing stimulus push a natural rhythm to change its endogenous frequency? The answer depends entirely on the properties of the stimulus and of the mechanism generating the natural rhythm. But we can explore this a bit in the current simulation.

The first two sweeps exactly repeat the previous experiment, but the third sweep attempts entrainment with a weaker forcing stimulus. The frequency graph should now look like this:

Entrainment: a frequency vs time graph showing 3 concatenated sweeps. On the left is the endogenous frequency. In the centre a forcing stimulus entrains the rhythm to a higher frequency. On the right a slightly weaker forcing stimulus periodically fails to entrain.

The weaker stimulus attempts to increase the rhythm frequency, but periodically fails. This can be seen more clearly by looking at the individual sweeps.

To see what is actually happening we need to zoom in on the relevant section in the Results view.

You can see that with the stronger stimulus (sweep 2) every cycle of the rhythm is entrained: there are 7 stimulus pulses visible in the display and 7 spikes, so each stimulus is followed by a spike. However, with the slightly weaker stimulus (sweep 3) there are still 7 stimulus pulses, but only 6 full spikes. The rhythm "skips a beat" on the 4th visible stimulus.

Take-home message: A periodic forcing stimulus can entrain an endogenous neural oscillator to a different frequency, but only within a certain range.

Synchronization

Most real oscillator circuits involve a pool of neurons with quite similar properties, rather than just single neurons. This makes the circuit more robust, since individual neurons in the pool can be damaged or destroyed without seriously impairing the circuit as a whole. However, there has to be some way of synchronizing the neurons, and this is typically accomplished by electrical coupling between local neighbours. This enables a neuron which is oscillating too rapidly to be “pulled back” by current drain into its more restrained neighbours, while one which is going too slowly will be helped along. This process is called synchronization.

WARNING: The following simulation involves flashing multi-coloured lights. If you could be adversely affected by this sort of visual stimulus, you should use Neuron: Colours: Edit colour map to change the map to monochrome, or remove the colours entirely by unchecking Colour from voltage.

There is a 10 x 10 matrix of neurons, each of which is a non-spiking endogenous burster. The bursters have identical properties and they start off synchronized, but each has a substantial amount of membrane noise which will randomly perturb its rhythm. In the Setup view the neurons are colour-coded by their membrane potentials, and 4 neurons from the circuit are shown in detail in the Results view (the arrangement of 2 per axis is just to aid comparison).

Each neuron is connected to its neighbours by electrical synapses. For clarity these have been hidden, but they can be revealed by deselecting the Connexions: Hide connexions menu option. BUT, in the default configuration as loaded, all the electrical synapses are blocked by the drug NEM (n-ethylmaleimide, a gap-junction blocker).

Because the electrical synapses are blocked, each neuron free-runs at its own intrinsic rhythm, and due to the random noise, they rapidly desynchronize.

This unblocks the electrical synapses, and the neurons rapidly synchronize. (I was once lucky enough to see synchronized firefly flashing in the North Georgia mountains while on a family holiday. The similarity in visual appearance to the output of this simulation was striking, and the underlying mechanism of nearest-neighbour coupling is probably similar.) As soon as one neuron enters its depolarized phase, it “pulls” its neighbours along after it, and a wave of depolarization sweeps across the whole network. Over time, the pattern of this wave can change, since the membrane noise means that it will not always be the same neuron that takes the lead role.
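
The "pulling" effect of nearest-neighbour coupling can be mimicked with a Kuramoto-style phase model. This is a deliberate abstraction, not Neurosim's conductance-based bursters, and the parameters are invented; note too that np.roll wraps the edges of the grid around, which the simulated network does not:

```python
import numpy as np

# Noisy phase oscillators on a 10 x 10 grid, coupled to nearest neighbours.
rng = np.random.default_rng(0)
n, g, noise, dt = 10, 0.5, 0.3, 0.01
phase = rng.uniform(0, 2 * np.pi, (n, n))        # desynchronized start

for _ in range(5000):
    pull = np.zeros_like(phase)
    for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
        pull += np.sin(np.roll(phase, shift, axis=axis) - phase)
    phase += dt * (2 * np.pi * 1.0 + g * pull)   # 1 Hz intrinsic rhythm
    phase += np.sqrt(dt) * noise * rng.standard_normal((n, n))

print(np.abs(np.mean(np.exp(1j * phase))))       # order parameter ~1 => synchronized
```

With g set to 0 the order parameter stays near 0 (the noise keeps the grid desynchronized); with coupling enabled it climbs toward 1, the phase-model analogue of unblocking the gap junctions.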

Spike vs Time mode

An alternative visualization can be achieved in the Spike vs Time mode.

Metachronal Rhythm

In the previous simulation, all the oscillators had the same intrinsic frequency. However, this does not have to be the case.

We have 5 neurons linearly coupled by electrical synapses into a nearest-neighbour chain, although as before, the synapses are initially blocked by NEM.

Each neuron is an oscillator, and can be thought of as representing the segmental CPG in a chain of ganglia such as might control, for instance, the legs of a centipede. The CPGs are not identical. The intrinsic frequency follows a segmental gradient, with N5 (the most posterior) being the fastest, and each segmental homologue being slower as they ascend rostrally in the chain. With the coupling blocked, each CPG free-runs at its intrinsic frequency.

After a short while, the coordination appears chaotic. Each neuron is oscillating at its own frequency, and the peaks drift past each other as the simulation progresses, producing an apparently random pattern of colour changes in the chain of neurons.

After a few cycles, the CPGs now become synchronized. However, N5 always leads the rhythm, with the other segments following in sequence. Thus a metachronal rhythm is generated, in which a wave of excitation sweeps rostrally through the chain.
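
The same abstraction reproduces the metachronal wave when the oscillators are given a segmental frequency gradient (again a sketch with invented numbers; N5 is the fastest, as in the simulation):

```python
import numpy as np

# A chain of five phase oscillators with a rostrocaudal frequency gradient.
rng = np.random.default_rng(1)
freqs = np.array([1.0, 1.1, 1.2, 1.3, 1.4])      # Hz: N1 slowest .. N5 fastest
g, dt = 2.5, 0.001                               # coupling strength, s
phase = rng.uniform(0, 2 * np.pi, 5)

for _ in range(30000):                           # 30 simulated seconds
    pull = np.zeros(5)
    pull[:-1] += np.sin(phase[1:] - phase[:-1])  # pull from caudal neighbour
    pull[1:] += np.sin(phase[:-1] - phase[1:])   # pull from rostral neighbour
    phase += dt * (2 * np.pi * freqs + g * pull)

# Once locked, N5 leads: each more rostral segment lags a bit further behind.
print(np.round((phase[-1] - phase) % (2 * np.pi), 2))
```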

With the central oscillator removed from the circuit, the two ends now oscillate independently. The posterior end oscillates at relatively high frequency, driven by N5 as its pacemaker. The anterior end oscillates more slowly, with N2 acting as its pacemaker.

This very simple model uses just one of the many mechanisms that can produce a metachronal rhythm. However, it can generate testable hypotheses. It predicts that if you remove the central oscillator, perhaps surgically, or perhaps, more reversibly, by pharmacological intervention, the two ends will continue to produce coordinated oscillations, but at different frequencies. If you did that in a real system and found, for instance, that the back end continued to oscillate but the front end shut down, then this means that your system must be using a different mechanism to this model.

 


Stochastic Resonance: Noise Matters

Noise, in a signal processing context, refers to random fluctuations in a signal which do not have any information content relevant to the signal itself. Noise is usually thought of as a bad thing because it corrupts the signal - the receiver has no way of knowing which fluctuations in a signal are noise, and which are the information of interest. However, in some contexts, noise can actually be a good thing. In particular, in sensory systems, noise can enhance the sensitivity of the receiver to small signals through a process known as stochastic resonance. (Stochastic resonance theory was originally developed specifically for oscillating signals, but the term is now generally taken to include any situation in which noise enhances the performance of a non-linear signal processing system; McDonnell & Abbott, 2009.)

The Setup shows two spiking sensory neurons (N1 and N2), receiving an identical stimulus input. The two sensory neurons are themselves identical, except that N2 has added noise. (The noise is generated by random current fluctuations following an Ornstein-Uhlenbeck process, which is a good approximation to the noise generated by the random opening and closing of ion channels; Linaro et al., 2011.) The default stimulus amplitude is quite small (0.08). This generates a voltage response in N1 which is definitely below threshold, so N1 does not spike. It is also very unlikely that N2 spikes (unless the noise takes an exceptionally positive random value), so the stimulus is undetected by either sensory neuron.

The Options: Run on change menu toggle has been pre-selected, so a new simulation runs when you change the stimulus amplitude. However, the stimulus is still quite small and it is unlikely to generate spikes in the sensory neurons (it certainly won’t in N1, it probably won’t in N2). Note that the purple bars in the frequency graph are both at or close to zero.

As the stimulus amplitude increases, you should start to see occasional spikes in N2, which is the sensory neuron with added noise. The frequency of these spikes increases as the stimulus strength increases (note the N2 purple bar rises higher in the frequency graph). However, N1 remains silent even though it receives the same input.

Remember that N1 and N2 are identical apart from the noise, and so have exactly the same spike threshold. However, as the membrane potential in N2 gets close to threshold, the added noise occasionally lifts it above threshold, hence the spikes. N1 remains silent because without noise its threshold is never reached for these stimuli.

The stimulus now finally crosses threshold in N1, which consequently generates spikes. However, the spike frequency in N1 immediately jumps to about 30 Hz – there is no graded low-frequency response like there was in N2.

Both neurons now respond with approximately the same increasing spike frequency as the stimulus strength increases. Thus the noise has not reduced the coding capability of N2 relative to N1 in terms of the average spike frequency above N1 threshold, although the fine-timing capability is reduced due to the increased uncertainty in the exact moment at which threshold is crossed.

Take-home message: The noise increases the sensitivity of N2, so that it can respond to weaker stimuli than N1, even though they both have the same absolute spike threshold. Furthermore, the noise increases the dynamic range of N2, so that it is able to code the weaker stimuli with lower frequency spikes.

A key requirement for the occurrence of stochastic resonance is that the detecting system should be non-linear. The sensory neurons are highly non-linear because they generate spikes: when the input signal is below threshold they have zero output; when it is above threshold they generate spikes in which the input amplitude is coded by the output frequency. It is this non-linearity that allows noise to benefit signal detection. (If the neurons were non-spiking and their output was mediated by non-spiking synapses, then the noise would simply contaminate the output and be of no benefit whatsoever.) Of course, even in a non-linear system there is a limit to the benefit - if there is too much noise the signal will just become lost in the noise. In fact, for time-varying signals there is usually a single optimum noise level, which is why "resonance" is part of the name originally coined for the process.
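The whole effect can be captured in a few lines of code. The sketch below is not a Neurosim model - it is a bare threshold detector with a sub-threshold sine-wave input and additive Gaussian noise, with arbitrary parameter values - but it shows the characteristic stochastic resonance profile: no detection without noise, good signal tracking at moderate noise, and degradation when the noise dominates:

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 1e-4                                    # s
t = np.arange(0, 2.0, dt)
freq = 5.0                                   # Hz
signal = 0.9 + 0.05*np.sin(2*np.pi*freq*t)   # peaks at 0.95: always sub-threshold
threshold = 1.0

def tracking(noise_sd):
    """Correlation between upward threshold crossings and the signal cycle."""
    v = signal + noise_sd*rng.standard_normal(t.size)
    spikes = ((v[1:] >= threshold) & (v[:-1] < threshold)).astype(float)
    if spikes.sum() == 0:
        return 0.0                           # no spikes: nothing detected
    return np.corrcoef(spikes, np.sin(2*np.pi*freq*t[1:]))[0, 1]

for sd in (0.0, 0.02, 0.05, 0.2, 1.0):
    print(sd, round(tracking(sd), 3))
# zero with no noise, best at moderate noise, falls away as noise swamps the signal
```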

Finally, it is worth pointing out that some level of noise is absolutely inevitable in any biological system, so all spiking sensory neurons will benefit from stochastic resonance to some extent. However, I am not aware of any evidence for specific biological adaptations tuning the level of intrinsic noise in sensory neurons to enhance their detection capability. [It has, however, been shown that adding noise to an external stimulus can enhance its detection through stochastic resonance (e.g. Levin & Miller, 1996).]

Dithering in visual processing

In human-made systems, audio and video engineers frequently add noise to electronic analog-to-digital (AD) conversion circuitry specifically to induce stochastic resonance. This is called "dithering". Wikipedia gives a fascinating account of the origin of the term:

"…[O]ne of the earliest [applications] of dither came in World War II. Airplane bombers used mechanical computers to perform navigation and bomb trajectory calculations. Curiously, these computers (boxes filled with hundreds of gears and cogs) performed more accurately when flying on board the aircraft, and less well on ground. Engineers realized that the vibration from the aircraft reduced the error from sticky moving parts. Instead of moving in short jerks, they moved more continuously. Small vibrating motors were built into the computers, and their vibration was called dither from the Middle English verb "didderen," meaning "to tremble." (quoted from Pohlmann, 2005).

The importance of dithering in a biological context can be illustrated with a simple model of the early stages of the mammalian visual system.

The retina forms a 2D map of visual space, and the detection and early processing of the visual signal within the retina is carried out by non-spiking neurons (for which stochastic resonance is of no benefit). However, the output from the retina is carried by spiking neurons - the retinal ganglion cells (RGCs) that project to the lateral geniculate nucleus (LGN), where the receiving neurons also form a 2D retinotopic map. A ganglion cell can thus be regarded as a 1-bit AD converter - if its input is above threshold it spikes, and the signal is detected and transmitted to the LGN; if it is even slightly below threshold, it does not spike and the signal goes undetected. Dithering enables such just-below-threshold signals to be detected.
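The 1-bit converter analogy can be made concrete with a trivial numerical sketch (the threshold and noise values below are arbitrary, not taken from the simulation). Without dither the sub-threshold input is invisible; with dither, the average output over many samples (or, equivalently, across many noisy RGCs) varies monotonically with the input, so the signal becomes recoverable:

```python
import numpy as np

rng = np.random.default_rng(0)
threshold = 0.5               # the "spike" threshold (arbitrary units)
x = 0.4                       # sub-threshold input

print(float(x >= threshold))  # 0.0 - a 1-bit converter reports nothing

# add dither and average over many samples (or many noisy RGCs)
dithered = (x + 0.1*rng.standard_normal(100_000)) >= threshold
print(dithered.mean())        # ~0.16, and the mean rises monotonically with x,
                              # so the sub-threshold value is recoverable
```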

The Setup view shows two rectangular (10 x 11) blocks of neurons, both of which form a 2D map of visual space. The top block represents RGCs, while the bottom block represents LGN neurons. Each RGC makes an excitatory synaptic connection to its equivalent LGN neuron (e.g. the top-left RGC connects to the top-left LGN neuron, etc.), but these connections are hidden to avoid a very confusing Setup view. [You can see them by toggling the Connexions: Hide all connections menu command.]

The retina receives a visual stimulus that switches between two different patterns, as set by the list of target neurons of the two stimuli. [It is difficult to interpret the patterns from the list of stimulus target neuron numbers, but don't worry, they will be revealed once we introduce dithering!] In the Results view you can see that N1 (top left in the RGC block) is not stimulated by either pattern and is silent throughout. N13 (2nd row, 3rd column in the RGC block) is stimulated in both patterns and shows two periods of depolarization. However, the stimulus strength (0.1 nA) is set just below RGC threshold, so the RGCs do not spike and the LGN does not detect the stimulus.

In the Results view, the RGC traces (N1 and N13) both show the noise dither, and in N13 this combines with the visual stimulus to periodically take the neuron above threshold. The RGC spike causes an EPSP in the paired LGN neuron (N123), which also spikes. In contrast, N1, which does not receive a visual stimulus, does not spike, and its paired LGN neuron (N111) remains silent. The two patterns of visual stimuli now become obvious in the colour coding of the membrane potential of the LGN neurons.

Note that the persistent colours are caused by the relatively long duration of the EPSPs. The spikes also cause a colour change in both the RGCs and LGN neurons, but this is so brief that it is barely perceptible.

What happens if we increase the strength of the dithering noise?

Now the noise will very occasionally take an unstimulated RGC above threshold, and a spurious response occurs in the paired LGN neuron.

Now, unstimulated RGCs spike more frequently, and the "correct" pattern is hard to discern in the LGN colour map (although it is still obvious in the different spike frequency of the inside-pattern and outside-pattern LGN neurons N111 and N123).

As is the case in other aspects of real life, it is evident that too much dithering is a bad thing!

Threshold variability

It is almost inevitable that in an array of real biological neurons there will be some variability in the individual thresholds, even if the neurons all nominally belong to the same class. Could this variability produce a similar effect to dithering?

The menu command applies a small random variation to the threshold of each RGC neuron; the variation differs between neurons but is constant in time (the values were pre-configured and stored in the parameter file, but are only used when the toggle is applied).

The stimulus is now sufficient to take some of the RGC neurons above their (reduced) threshold, so a response is visible in the corresponding LGN neurons. There are two different response patterns corresponding to the two different stimuli, but in neither case can the overall stimulus pattern be adequately reconstructed from the small number of activated LGN neurons. If this was all the information that the visual system received, the animal would know that something had happened within its field of vision, but would have a completely wrong impression of what had happened.

Now an accurate representation of both patterns is clearly visible in the LGN response, once again demonstrating the value of dithering.

 


Lateral Inhibition

From an evolutionary perspective, paying attention to changes in the environment is probably more important than focusing attention on things that just carry on without change. The change can be something in time – the sudden noise that alerts you to the presence of danger, or in space – the visual line that demarcates the edge of a narrow path with a steep drop on one side. So it is not surprising that the nervous system is especially tuned to detect such changes.

One well-known mechanism for emphasizing edges in a spatial field is lateral inhibition. This occurs in spatially-mapped senses such as the visual system, and it refers to the ability of a stimulated neuron to reduce the activity of neurons on either side of it.

Lateral inhibition
Lateral inhibition. In a one-dimensional array each neuron inhibits the neurons beside it. In a 2-D array, neurons would inhibit their nearest neighbours.


This is a concept simulation loosely based on the vertebrate retina. The top row is an array of spatially-mapped non-spiking receptors (or bipolar on-cells in the retina). Each neuron in the row inhibits those on either side of it through a non-spiking chemical synapse, thus mediating lateral inhibition. (The connections have been hidden for clarity, but can be revealed by deselecting the Connexions: Hide all connexions menu option.) The middle part of the array (N10 - N20) will receive a brief stimulus delivered through the square boxes when you run the experiment.

The lower row is an array of spiking interneurons (perhaps ganglion cells in the retina). Each interneuron is activated by its partner receptor in the row above through a non-spiking excitatory chemical synapse. The interneurons have some added membrane noise to enhance their sensitivity through stochastic resonance.

The Results view shows activity in the receptor layer, but the Display mode has been set to Voltage vs Neuron. This means that the X-axis represents the individual neurons in the receptor array (N1 - N30), and the Y axis represents the membrane potential of each of those neurons. The whole display evolves over time, before stabilizing, at which point dots are drawn to show the potential of each neuron.

The response shows the clear edge-enhancement produced by lateral inhibition. If you hover over the two "cat's ear" peaks, the status bar shows that they are generated by N10 and N20, which are the two receptors just on the inside edge of the stimulus, while the adjacent troughs are generated by N9 and N21, which are just on the outside edge. The network thus differentially amplifies the response at the transition from unstimulated to stimulated receptors, compared to the stable conditions within the “body” of the stimulus, although there is still a clear difference in activity level in that region too.
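If you want to check your reasoning numerically, the "cat's ear" pattern can be reproduced with a single feedforward pass of subtractive lateral inhibition. This is a deliberate simplification - the simulation itself settles recurrently to a steady state, and the weight below is an arbitrary choice:

```python
import numpy as np

receptors = np.zeros(30)
receptors[9:20] = 1.0            # stimulus applied to N10 - N20 (0-based indices 9-19)

w = 0.3                          # inhibitory weight from each neighbour (assumed)
padded = np.pad(receptors, 1)
output = receptors - w*(padded[:-2] + padded[2:])   # subtract both neighbours' drive

print(output[7:12].round(2))     # [ 0.  -0.3  0.7  0.4  0.4]: trough at N9, peak at N10
```

The output peaks at the inside edges because those receptors receive inhibition from only one stimulated neighbour, while receptors in the body of the stimulus receive it from two; the outside-edge receptors receive inhibition but no stimulus, producing the troughs.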

Task: Think through how lateral inhibition produces the edge-enhancement that the simulation demonstrates. Explain it to a friend!

With the picrotoxin applied, the receptor array response is simply proportional to the strength of the stimulus – there is no edge-enhancement.

It is worth noting that lateral inhibition reduces the overall within-stimulus activity level compared to what it is without inhibition, and also reduces the difference in activity between the unstimulated response and the response in the “body” of the stimulus. However, the enhancement in the difference seen at the edge seems to be a worthwhile trade-off, given that lateral inhibition has been found in many sensory processing systems in many animals.

What happens in the spiking interneuron layer?

The Result view now shows a raster plot of the spikes in the interneuron layer (N31 - N60), with each dot representing a spike.

It is very clear that N40 and N50 have a strongly enhanced spike rate during the stimulus. These are the interneurons that receive their input from the receptors just on the inside edge of the stimulus. The interneurons within the "body" of the stimulus spike occasionally, but at a much lower rate than those just inside the edge.

To see the membrane potential responses of individual neurons:

The Results view shows the activity of a receptor and its paired interneuron outside of the stimulus (N5, N35; top two traces), just on the inside edge of the stimulus (N10, N40; middle two traces) and within the body of the stimulus (N15, N45; bottom two traces). Note that the spike responses will be variable due to the membrane noise, particularly in N45 which is close to threshold during the stimulus.

Take-home message: Lateral inhibition increases the perceived contrast at the edge of a spatially-mapped stimulus.

 


Pre-Synaptic Inhibition

One advantage of chemical synapses for communication between neurons is that they can be highly plastic - the strength of the connection can often be modulated, both on a long-term basis (e.g. long-term potentiation; LTP), and also on a moment-by-moment basis. One of the key mechanisms underlying the latter is pre-synaptic inhibition.

Standard post-synaptic inhibition is a familiar phenomenon - IPSPs impinge on a neuron and counteract the effect of any EPSPs occurring in the same neuron. Pre-synaptic inhibition is different - it occurs when an inhibitory neuron targets the release terminals of the excitatory pre-synaptic neurons that are delivering the EPSPs to the post-synaptic neuron, and prevents or reduces the release of transmitter. Pre-synaptic inhibition thus directly reduces the size of the message impinging on the post-synaptic neuron, rather than merely counteracting its effects after it has already arrived.

Pre-synaptic inhibition is well established as a key mechanism for gating the flow of sensory information into the CNS. It would be quite difficult to stop a sensory neuron from actually responding to a peripheral sensory stimulus - this might require sending an inhibitory axon all the way to the periphery and inhibiting the sensory neuron at its transduction site. Post-synaptic inhibition could prevent a neuron from responding to sensory input, but it would also prevent it responding to any other input arriving at the same time. If an animal needs to block sensory input from a specific source but leave it responsive to other inputs, the answer is to allow the sensory neuron to respond as normal, but to stop it from making output in the CNS, and thus prevent it from having any effect. [This is, of course, a totally post hoc argument - there may well be other answers, and evolution often comes up with solutions to problems that do not seem to be the most efficient way of doing things.]

pre-synaptic inhibition

Pre-synaptic inhibition. A sensory (afferent) neuron receives GABAergic pre-synaptic inhibition that reduces its transmitter release and prevents activation of the post-synaptic motor/interneuron.

Primary Afferent Depolarization (PAD)

Pre-synaptic inhibition of afferent neurons in the vertebrate spinal cord is mediated by the release of GABA from inhibitory interneurons, which activates GABA-A receptors in the afferent terminals. This leads to an increase in chloride conductance in the terminals, and a consequent IPSP. However, there is an unusually high concentration of Cl- within the terminal itself (due to the sodium-potassium-chloride co-transporter NKCC1), and so the equilibrium potential for Cl- is depolarized relative to the resting potential. The IPSP is therefore depolarizing, as discussed previously in the Synapse part of the tutorial. This is called primary afferent depolarization (PAD) (Engelman & MacDermott, 2004).
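The reversed chloride gradient is easy to quantify with the Nernst equation. The concentrations below are illustrative round numbers, not measured values, but they show why an NKCC1-loaded terminal produces a depolarizing IPSP:

```python
import math

RT_F = 26.7  # RT/F in mV, near body temperature

def e_cl(cl_in_mM, cl_out_mM=120.0):
    # Nernst potential; z = -1 for chloride flips the concentration ratio
    return RT_F * math.log(cl_in_mM / cl_out_mM)

print(round(e_cl(7.0)))   # about -76 mV: low internal Cl-, conventional hyperpolarizing IPSP
print(round(e_cl(30.0)))  # about -37 mV: NKCC1-loaded terminal, the "IPSP" depolarizes
```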

Depolarizing inhibition is highly counter-intuitive at first sight, and yet it is ubiquitous in sensory gating, and quite common elsewhere too. So how does it work, and what are its advantages?

The Setup view shows a sensory (afferent) axon snaking its way into the spinal cord. The axon is simulated as a compartmental model implementing HH-type spikes. A spike is initiated in the peripheral segment (N1, at the top in the Setup view, red) by sensory stimulus 1. The peripheral spike is shown in the Results view top axis as the red trace. The spike propagates along the axon into the CNS, where after a short delay it arrives at the terminal output segment (N14 in the Setup view, green; Results view top axis green trace).

To make this clear:

You should see the spike propagating in the axon as a series of colour changes.

Within the CNS the sensory neuron makes an excitatory synaptic connection to a post-synaptic neuron (N15, blue; this could be a motorneuron or an interneuron). The EPSP is visible in the Results view 3rd axis (blue trace). [To make the EPSP sensitive to the shape of the pre-synaptic spike, it is generated by a non-spiking synapse with a high threshold for transmitter release.] For clarity, the post-synaptic neuron N15 has been made a non-spiking neuron, but the EPSP is quite large and could well elicit a spike if the neuron were capable of generating one.

The last segment of the afferent axon receives pre-synaptic inhibition from an interneuron (N16, orange), but this is not activated in the default situation (no spike is visible in the lower axes of the Results view), so there is no inhibition.

Several changes are visible:

  1. There is a spike in the pre-synaptic inhibitor neuron (orange trace), which is timed to occur just before the afferent spike arrives in the terminal segment of the afferent axon.
  2. A depolarizing potential is visible in the membrane potential of the terminal segment (green trace, top axis) which starts before the spike arrives in that segment. This is the PAD generated by the pre-synaptic inhibitor.
  3. The afferent spike in the terminal segment is reduced in amplitude and duration compared to the spike without PAD.
  4. The EPSP in the receiving neuron (blue trace) is considerably reduced in amplitude - this is the result of the PAD-induced pre-synaptic inhibition.

Inhibition Mechanism

Why does the PAD generate inhibition? This is still a matter of some debate, but there are generally thought to be 2 effects at play.

  1. Shunting: The increased chloride conductance in the afferent terminal will act as a current shunt, which increases the leak of the spike-induced depolarizing current, and hence reduces spike amplitude. However, this would also occur with a hyperpolarizing IPSP, so it does not explain why depolarization is so common.
  2. Sodium inactivation / potassium activation: The depolarization starts to inactivate voltage-dependent sodium channels and activate voltage-dependent potassium channels just before the afferent spike invades the terminal region. This will reduce the peak spike amplitude and duration.

Either mechanism would reduce the activation of voltage-dependent calcium channels in the pre-synaptic terminal and hence the inflow of calcium, thus reducing transmitter release.
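The shunting effect (mechanism 1 above) can be illustrated with a steady-state conductance divider: the membrane potential is the conductance-weighted average of the reversal potentials, so adding a chloride conductance pulls the spike peak down toward the chloride equilibrium potential. The conductance and reversal values below are arbitrary illustrations, not simulation parameters:

```python
def v_steady(g_na, g_k, g_cl, e_na=55.0, e_k=-80.0, e_cl=-40.0):
    """Steady-state potential: conductance-weighted mean of the reversal potentials."""
    return (g_na*e_na + g_k*e_k + g_cl*e_cl) / (g_na + g_k + g_cl)

print(round(v_steady(10.0, 1.0, 0.0)))  # ~43 mV: spike peak with no shunt
print(round(v_steady(10.0, 1.0, 5.0)))  # ~17 mV: added Cl- conductance shunts the peak
```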

At this stage you should have 2 sweeps visible on the screen, the first showing the blue EPSP without pre-synaptic inhibition, the second showing it with the inhibition. (If you have cleared the screen, repeat the experiments above to get back to this stage.)

A third simulation runs, but this time there is no PAD (the green trace is indeed flat until the spike takes off). Remember that the inhibitory synapse is still being activated, and therefore there is still a GABA-induced increase in chloride conductance, but there is no voltage change in the afferent terminal. So the shunting effect (1 in the list above) is still present, but there is no effect on the voltage-dependent channels (2 in the list above).

This generates an EPSP in the post-synaptic N15 (blue trace) that is smaller than the full-sized EPSP (sweep 1), so there is some inhibition, but larger than the EPSP generated with PAD (sweep 2), so the inhibition is not as effective. The pre-synaptic spike height is also intermediate between the other two situations.

Take-home message: Shunting alone produces some degree of pre-synaptic inhibition, but the inhibition achieved with PAD in combination with shunting is considerably more effective than that generated by shunting alone. [But note that this is a concept simulation and the parameters are not derived from experimental data, so the balance may vary in different real preparations.]

Question: What happens if you generate a hyperpolarizing IPSP?

There is now hardly any inhibition - the EPSP is almost the same size as the full-sized EPSP without inhibition. It appears that hyperpolarizing the pre-synaptic terminal actually works against pre-synaptic inhibition, presumably by relieving some of the sodium inactivation and potassium activation that occurs naturally at the resting membrane potential (see the Rebound Excitation tutorial).

Antidromic Spikes and the DRR

One of the weird consequences of PAD is that sometimes the depolarization can be enough to generate spikes in the sensory neuron, without any sensory stimulation occurring!

There is no initial spike in the periphery of the afferent (red trace, top axis) because you have turned off the sensory stimulus (1), but the central inhibitory interneuron is still stimulated (2) and so the PAD occurs as a visible depolarization of the afferent central terminal (green trace, top axis).

The PAD is now slightly larger, and the depolarization is sufficient to generate a spike in the central terminal of the afferent (green trace). This antidromic spike propagates backwards down the axon, out to the periphery N1 (red trace), but it also generates a small EPSP in the post-synaptic neuron N15 (blue trace). [It is well known that under experimental conditions axons can propagate spikes in either direction. Spikes that travel in the normal direction are called orthodromic spikes, while spikes that travel in the opposite direction to normal are called antidromic spikes.]

One might think that generating extra spikes in afferents would be the precise opposite of what was wanted to achieve inhibition. However, the PAD-generated spikes in the central region are of reduced amplitude, just like orthodromic spikes affected by PAD, so the EPSPs are also reduced compared to those generated by orthodromic spikes without pre-synaptic inhibition. [The chloride equilibrium potential in a real sensory neuron may be above threshold, but it is always considerably below the sodium equilibrium potential. This means that the increased chloride conductance will tend to counteract the increased sodium conductance that normally determines peak spike amplitude.] So although there is some extra excitation, it may not have much effect on the post-synaptic neuron. [As stated before, this simulation is not based on quantitative data. It would be quite easy to change the simulation parameters so that pre-synaptic inhibition eliminated the EPSP entirely, but for demonstration purposes the phenomenon is clearer if some residual excitation is allowed to remain.] Also, the antidromic spikes will collide with any orthodromic spikes coming up the same axon at that time, and prevent them from reaching the CNS, so this may actually contribute to inhibition (although the probability of such a collision is likely to be quite small).

Antidromic spike generation as a result of PAD is quite common in real preparations because the chloride equilibrium potential in vertebrate sensory neurons is often above their spike threshold. One situation that can generate antidromic sensory spikes is massive sensory stimulation, such as that caused by the pain response to damaged tissue. The resulting PAD can be so large that it causes a volley of antidromic sensory spikes known as a dorsal root reflex (DRR). There is some evidence that this antidromic discharge in nociceptors may cause release of peptides from the peripheral afferent endings, which may in turn exacerbate inflammation of damaged tissue. There is thus the possibility of a pathogenic positive feedback loop (Lin et al., 1999). [Of course, the inflammation may increase a behavioural "guarding response" that helps protect against further damage, and may even have beneficial anti-bacterial effects. We all hate pain, but sometimes it may be good for us!]

 


The Jeffress Model for Auditory Localization

Many animals, including ourselves, are remarkably good at localizing where a sound comes from, but the overall champion is probably the owl (reviewed in Sillar et al., 2016, chapter 3). These birds can not only accurately determine the spatial origin of the faint rustle of a mouse crossing the floor, they can do it in complete darkness!

For horizontal (azimuth) localization, the owl uses the difference in the time of arrival of sound at its two ears. If the origin is straight ahead (or behind) the sound will arrive at the left and right ear simultaneously. If the sound comes from the right, it will get to the right ear first, and if it comes from the left, it will get to the left ear first. However, owls have quite small heads and sound travels rapidly, so the interaural time difference (ITD) is only in the order of microseconds. How can the nervous system measure time differences which are that small?

The Jeffress model (Jeffress, 1948) describes a neurocomputational mechanism for measuring such very small time differences. It was originally completely hypothetical, but there is now strong evidence that a Jeffress-like mechanism operates in the nucleus laminaris in birds (Carr and Konishi, 1988), and may also operate in the medial nucleus of the superior olivary complex in mammals, although this is more debatable (Grothe et al., 2010).

[There is a short animated tutorial on this topic which you may want to look at before running the following simulations.]

This represents an idealized version of the Jeffress model. The network has two auditory neurons, N1 and N2, driven by sensory input from the left and right ear respectively. [In the owl these are relay interneurons arising from the magnocellular nucleus within the cochlear nucleus.] These neurons each have a single axon that passes across a linear array of neurons (N3 – N7) in a coincidence detection (CD) layer (the reason for the name will become apparent shortly). [In the owl these are located in the laminar nucleus of the brainstem.] The auditory axons approach this layer from opposite ends. As it passes each coincidence detector, each axon emits a branch that makes an excitatory synaptic connexion to that detector. This means that the axon path length, and hence the synaptic delay, of each connection varies. [Note that this delay is due to the fixed conduction velocity of the axon; it has nothing to do with the delay associated with the synapse itself.] In the circuit layout in the Setup view, connections with short paths have short delays, and connections with longer paths have longer delays. To be specific, auditory neuron N1 connects to coincidence detector N3 via a short axon path with a delay of only 1 ms, but it connects to coincidence detector N7 via a longer axon path with a delay of 3 ms. Similarly, N2 connects to N7 with a delay of 1 ms, and to N3 with a delay of 3 ms.

Jeffress mechanism
The Jeffress mechanism. Inter-aural time difference is detected through a combination of delay lines and coincidence detection.
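The delay arithmetic is simple enough to check by hand, or with the short sketch below. It assumes the intermediate detectors have evenly spaced delays in 0.5 ms steps, which is consistent with, but not explicitly stated by, the 1 ms and 3 ms end-point values given above:

```python
import numpy as np

cd = np.arange(5)                    # coincidence detectors N3..N7
delay_left  = 1.0 + 0.5*cd           # N1 delays: 1 ms to N3 .. 3 ms to N7 (assumed linear)
delay_right = 3.0 - 0.5*cd           # N2 delays: 3 ms to N3 .. 1 ms to N7

def winner(t_left, t_right):
    """The CD neuron whose two inputs arrive closest together in time."""
    gap = np.abs((t_left + delay_left) - (t_right + delay_right))
    return cd[np.argmin(gap)] + 3    # convert array index to neuron number

print(winner(4.0, 4.0))   # 5 : simultaneous arrival -> N5 (straight ahead)
print(winner(4.0, 3.0))   # 4 : right ear leads by 1 ms -> N4
```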

Imagine that a sound pulse is generated at the moment you click the button, and that it originates from straight ahead. The sound propagates to the ears, and arrives at both ears simultaneously, 4 ms later. This is simulated by the stimuli applied to N1 and N2. These both have a delay of 4 ms, and have been set to each generate a single spike in their respective auditory neurons. These are visible in the top 2 traces in the Results view (at reduced gain).

The spikes conduct to the CD layer, but the spike from the left ear arrives in N3 before the spike from the right ear, because of the different delay lines. The EPSPs summate, but not completely because they are not coincident. The same is true for N4, although the summation is greater.

However, at N5 the spikes arrive simultaneously, and the EPSPs are coincident and they summate completely, and the neuron spikes. A spike in N5 of the CD layer is interpreted by the higher brain processing mechanisms as indicating that the sound came from straight ahead (or behind – the mechanism does not solve that ambiguity).

The sound now arrives earlier in the right ear (at 3 ms) than the left (still at 4 ms), indicating that the sound came from the right.

Now N5 is silent but N4 spikes. The brain now knows that the sound came somewhat from the right!

Task: Try various combinations of sound stimulus delay for both stimuli (only change in 1 ms steps for this simulation). See if you can predict which neuron will spike in the CD layer (if any) for your chosen combination of delay.

Take-home message: The Jeffress mechanism relies on two key circuit features. First, an ordered map of delay lines projecting from left and right receptors to the coincidence detection layer. Second, the neurons in that layer do what the name suggests: they act as coincidence detectors. They have brief EPSPs and rapid time constants, and they only spike if there is complete temporal summation of their inputs.

Phase Ambiguity

We will now look at a slightly more sophisticated implementation of the Jeffress mechanism. This is more realistic, although still a long way from the complexity of a real nervous system.

The circuit is very similar to the previous one, but there are more neurons in the CD layer and the stimuli are different. The same rules apply regarding synaptic delay.

The Results view shows the two auditory neurons (N1 and N2). This time the sound stimulus is a continuous pure-tone sine wave with a period of 3 ms, giving a frequency of 333 Hz. [This is somewhat above middle C on a piano, but below the Stuttgart tuning pitch of A at 440 Hz (showing off!).] The sound arrives at the left ear (N1) 4 ms after you click the Start button, and at the right ear (N2) 5.5 ms after the click, giving an ITD of 1.5 ms. This is half the period of 3 ms, so there is a 180° phase difference in the sound wave at the two ears. This is visible in the stimulus monitor (lower trace). It corresponds to a sound originating from the left side (calculating exactly where would require doing some geometry and knowing the head size and speed of sound, so we won’t bother).

Both N1 and N2 show sinusoidal oscillations in response to the stimulus, but they only spike occasionally. [They both usually spike on the first sine-wave peak, but after that much less frequently. The increased probability of spiking to the first sine wave is due to the low-pass filtering properties of membranes, which enhance the initial transient response to a varying input signal.] This is because the peak of the sine wave is very close to the spike threshold, and, because there is some membrane noise in the neurons, the peak of the sine wave sometimes crosses threshold but usually it does not. (See the tutorial on stochastic resonance for more details on how noise can benefit sensory detection.) However, when the neurons do spike, the spike is always quite tightly synchronized to the peak of the sine wave. This is similar to how real auditory neurons behave. When either neuron spikes, an EPSP occurs in N3, which is the left-most neuron in the coincidence detection layer (the remaining neurons are not shown). However, the EPSP from a spike in N1 (the left ear) will have a shorter latency than the EPSP from a spike in N2 (the right ear), due to the difference in the conduction delay from the two sources.

Task: When you see an EPSP in N3, identify which pre-synaptic neuron (N1 or N2) generated it, based on the latency. Activating a vertical cursor (click the toolbar button vertical cursor toolbar button) might help clarify the timing.

An important feature of the signal processing is now apparent – phase ambiguity. There is a clear difference in the time of arrival at the start of the sine wave (the transient disparity, reflecting the different delays set for the stimuli), so this would be a good measure of ITD. However, this generates a single spike at best in each neuron. After that, the disparity in the continuing signal (the ongoing disparity) is ambiguous. Because the sensory neurons do not spike on every peak of the sine wave, and because it is random which peaks they do spike on, spikes in the two neurons separated by one sine-wave period might reflect a genuine ITD of 3 ms, or they might reflect an ITD of 0 ms where spikes just happened to occur on successive peaks; indeed, the real disparity could be any multiple of the period.

This is exactly the same circuit and stimulus, but the Results view is different. We are now looking at just the spikes, which are each represented by a dot, and we can see the activity of all the neurons in the circuit. The top 2 neurons (red and blue) are the auditory receptors, and these spike quite frequently (the dots are close together), while the remainder (green) are neurons in the CD layer, and these spike less frequently.

Let the simulation run for a while, and look at the spikes in the neurons in the CD layer. Hopefully, you will start to see that there are two diffuse “bars” of dots (spikes) centred around N9 and N16. These rows have a higher density of dots than the intervening rows around N4, N12, and N20.

This can be shown more clearly as follows.

This activates a new dockable window, the Spike Frequency display. It should already be set up with the X axis representing neurons 3 – 21, i.e. the neurons of the coincidence detection layer.

The Spike Frequency window shows a bar chart of the cumulative spike frequency of each neuron in the CD layer, which updates as the simulation progresses. After a while it should be clear that there are peaks in frequency centred around neurons N9 and N16, and troughs at N4, N12 and N19.

The sound waves are now arriving simultaneously at the two ears, so this corresponds to a sound origin on the midline. The peaks in the graph now occur where previously there were troughs, and vice versa. The network can discriminate between the two sound origins!

Two things might immediately strike you about this simulation. First, there is a lot of random noise in the timing of spike occurrences, so it takes quite some time to establish the true pattern. This actually reflects the experimental findings when recording from individual neurons in the CD layer in owls, where the Jeffress-like output is only established over multiple trials. However, since the owl can determine sound location very rapidly, it is likely that in the intact animal there are many similar circuits operating in parallel, and that the animal can rapidly establish the true pattern by averaging the outputs of these circuits.

The second noteworthy feature is that there are multiple peaks in the graph. The sound has a single location, but the output seems to be ambiguous.

The sound is now coming from the extreme left, but the output is the same as that generated by sound from the midline. The output is definitely ambiguous!

For comparison the outputs are shown below, with different colours for the different sound origins. These graphs are very similar in shape to those published as Figure 12 by Carr and Konishi (1990).

a Jeffress spike frequency 1      b Jeffress spike frequency 2      c Jeffress spike frequency 3
Spike frequency in the coincidence detection layer. a (green). ITD 1.5 ms, part left sound origin. b (red). ITD 0 ms, midline sound origin. c (blue). ITD 3 ms (the same as the stimulus period), far left sound origin. The response pattern is the same for b and c.

What is going on?

Remember that we are using a pure-tone sine wave as a stimulus, and that individual sensory neurons spike randomly on the peaks of the wave. With the midline origin and the extreme-left origin, the simulation is set up so that the difference in time of arrival is exactly one period of the sine wave, so apart from the very first cycle, the sine waves are identical at the two ears! More generally, even when the waves have a phase offset, it is not possible to distinguish between sound locations whose times of arrival differ by a whole number of cycle periods. Hence the multiple peaks on the graphs.
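In other words, the ongoing disparity only reports the ITD modulo the stimulus period:

```python
period = 3.0                        # ms, the 333 Hz tone
for itd in (0.0, 1.5, 3.0):         # midline, part-left, and far-left origins
    phase = (itd % period) / period * 360.0
    print(f"ITD {itd} ms -> ongoing phase offset {phase:.0f} deg")
# 0.0 ms and 3.0 ms both give 0 deg, so their ongoing disparities are indistinguishable
```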

This phase ambiguity can be demonstrated in intact, live animals, but only if they are given a pure-tone stimulus. In an experiment with such a stimulus, the bird will be confused and choose randomly between different locations at fixed angular intervals. So it is not a problem with the model, it is a problem with the animal! However, pure tones are very rare in nature, and as soon as the sound stimulus contains mixed frequencies (like the white noise of a rustling sound) the animal can determine the sound origin very accurately.

How do mixed frequencies resolve phase ambiguity?

There is good evidence that there are multiple Jeffress circuits, with each one tuned to a particular sound frequency. So with a pure tone, only one gets activated. If there is a mix of 2 tones, 2 circuits get activated. Both these circuits show phase ambiguity, but the peaks have different separations, corresponding to the different periods of the stimuli.
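The logic can be sketched numerically. Below, each CD layer is idealized as a cosine-shaped ITD tuning curve (a convenient simplification, not the Neurosim model); summing the 333 Hz and 222 Hz layers leaves a clear winner only at the true ITD:

```python
import numpy as np

best_delay = np.linspace(-3.0, 3.0, 25)   # ms, preferred ITD across the CD layer
true_itd = 1.5                             # ms

def cd_layer(period):
    """Idealized cosine ITD tuning for one frequency channel."""
    return np.cos(2*np.pi*(best_delay - true_itd)/period)

combined = cd_layer(3.0) + cd_layer(4.5)   # 333 Hz and 222 Hz layers summed
print(best_delay[np.argmax(combined)])     # 1.5 : only the true ITD peak aligns
```

Each individual layer has multiple equal peaks (at the true ITD plus or minus whole periods), but only the peak at the true ITD coincides in both layers.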

In this version of the model, a second set of auditory neurons (N22 and N23) and a new CD layer (N24 – N42) have been added. These are identical to the original set, but the auditory stimulus frequency has changed. The new stimuli (3 and 4) each produce sine waves with a period of 4.5 ms, giving a frequency of 222 Hz rather than the 333 Hz of the first set. [The amplitude of the lower-frequency current is also slightly lower, so that it produces the same sized voltage deflection in the sensory neurons as the higher-frequency stimulus. Remember that the neural membrane acts as a low-pass filter due to its RC properties, and without this change the higher frequency would produce a smaller response.] The ITD is the same as for the first set because the speed of sound is independent of its frequency.

A third layer called the integration layer has been added. This receives mapped excitatory input from equivalent neurons in both CD layers; i.e. the leftmost 333 Hz CD neuron and the leftmost 222 Hz CD neuron both make input to the leftmost integration layer neuron, and so on.

In this implementation the integration layer neurons are not coincidence detectors. In fact, the EPSPs are a different type (type b) compared to those impinging on the coincidence detectors, and they are sufficiently large that they initiate a spike in response to input from either CD neuron. The integration layer thus acts as a logical-OR integrator (this is not based on experimental evidence; it is in fact just a hypothesis about how the system might work).

Note that the graph is monitoring neurons in the 333 Hz CD layer (N3-N21).

We have already seen this result – there are two peaks of similar height in the graph, which thus demonstrates phase ambiguity as before.

Note that the right-hand peak is in the same location in both CD layers, but the left-hand peak is shifted to the left in the 222 Hz layer compared to the 333 Hz layer.

The output of the 3 layers is shown below. Note that in the integration layer graph (on the right below) the vertical scale has been changed.

a Jeffress integration layer 1   b Jeffress integration layer 2   c Jeffress integration layer 3

Spike frequency in the coincidence detection and integration layers. a (red). Coincidence detection in 333 Hz circuit. b (blue). Coincidence detection in the 222 Hz circuit. c (green). Integration layer. The integration layer is a summed response to the coincidence detectors.

In the two CD layers, only the “correct” peak (the right-hand one) is in the same location, so when activity is summed in the integration layer, this peak is enhanced. The other peaks flatten out, as the peak in one CD layer coincides with a trough in the other CD layer. Thus the summed output of the integration layer has a single dominant peak in the correct location, and the ambiguity is resolved. [With just two frequencies being integrated there is still a small but definite peak in the 'incorrect' location. However, hopefully you can see that if there were multiple frequencies, the multiple incorrect peaks at different locations would flatten out to a stable baseline, with just the 'correct' location showing a significant peak.]

 


Learning Networks

In its simplest form, the Hebb rule for synapses says that if a neuron fires a spike shortly after a pre-synaptic neuron excites it through a Hebbian synapse, then that synapse is strengthened. This simple concept forms the basis of many theories regarding learning mechanisms, although the mechanism that actually implements Hebbian learning is still much debated.

Classical Conditioning: Pavlov’s Dog

At a Hebbian synapse, the spike that triggers enhancement does not have to be generated by that synapse itself - a completely separate excitatory input can have the same effect. All that has to happen is that the cell post-synaptic to the Hebbian synapse spikes shortly after the Hebbian synapse is activated.

Hebbian synapses can thus provide a cellular mechanism analogous to classical (Pavlovian) conditioning.
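The core of the mechanism can be reduced to a few lines. In the sketch below (all values are invented for illustration), the post-synaptic cell spikes because of the strong US synapse, and each CS-US pairing strengthens the weak CS synapse until the CS alone crosses threshold:

```python
w_us, w_cs = 1.2, 0.2                    # meat synapse strong, bell (Hebbian) synapse weak
threshold, w_max, rate = 1.0, 1.5, 0.3   # illustrative values

for trial in range(5):
    # CS (bell) fires just before US (meat); the US drives the post-synaptic spike
    post_spikes = (w_us + w_cs) >= threshold
    if post_spikes:
        w_cs += rate * (w_max - w_cs)    # Hebb rule: pre active shortly before post
    print(trial, round(w_cs, 2), "bell alone drools:", w_cs >= threshold)
```

Timing dependence could be added by, for example, scaling the learning rate with an exponentially decaying function of the CS-US interval.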

In the Setup view we see three neurons that we will pretend exist in the brain of one of Pavlov’s dogs. [This is very much a concept diagram – it certainly does not reflect reality!] In the Results view we see recordings from these neurons.

N1 (top trace) is an olfactory meat detector. It spikes when a meat stimulus (stimulus 1 in the Setup view) is delivered: this is the unconditioned stimulus (US). N1 makes a strong excitatory synaptic connection to N2.

N2 (middle trace) is a command neuron in the saliva-producing motor path. It spikes when it receives an EPSP from N1, and when it spikes it makes the dog drool (remember, we’re just pretending that such a neuron exists).

N3 (bottom) is an auditory neuron that responds to ringing a bell, which is the conditioned stimulus (CS) and does not normally elicit saliva. [I HATE this terminology. It is the response that is conditioned; the bell stimulus does the conditioning. So it would make much more sense to call the bell a conditioning stimulus. However, my objections are not going to change more than 100 years of usage!] A bell is rung 3 times at fixed intervals, producing spikes in N3. N3 makes a weak Hebbian excitatory input to N2, which is visible as small EPSPs following each N3 spike.

Note that the smell of meat (N1 spike) elicits saliva (N2 spike), while subsequent bell-ringing (N3 spikes) does not. Thus when the US precedes the CS, the CS does not show any response augmentation. So far so good.

The first bell-ring now precedes the smell of meat, and subsequent bell-rings produce an enhanced response in N2, i.e. the EPSPs are bigger, causing a greater chance of drooling. However, they are still sub-threshold. The association between bell-ringing and the smell of meat is not strong.

As you reduce the delay, you bring the US closer to the CS (the meat stimulus follows the first bell-ring at a shorter interval). As the interval declines, the bell-induced EPSP gets bigger.

When the delay between the US and the CS is very short, the augmentation increases to the point where the CS elicits a spike. The dog now drools in response to ringing the bell!!

Classical conditioning is an example of the general phenomenon called “associative learning”, in which the nervous system learns an association between two independent events.

Take-home message: In classical conditioning timing matters. In most cases, the shorter the interval between the unconditioned and conditioned stimulus, the more effective the learning process.

(Classical conditioning is an immensely complicated subject that psychologists have devoted much effort to studying. This simulation only gives a very simplified view of one aspect of it.)

Associative Learning: Pattern Completion

Pattern completion is something that we are all familiar with. If we catch a partial glimpse of a familiar object, we have no difficulty in recognizing it and reconstructing the entire object from memory. On the other hand, if we only get a partial glimpse of something that we have never seen before, then of course we are completely unable to recognize it (unless it looks sufficiently like a bit of an object that we have seen before, in which case we might make a mistake and think the new object is the old object - and that is a fascinating topic in its own right).

There is a huge experimental and theoretical literature on associative learning and pattern completion, and many detailed and very clever mathematical approaches have been taken. However, many such approaches have some sort of Hebb rule at their heart.

This looks like a bit of a spider’s web because there are so many connections, but it is actually a quite simple repeating pattern.

There are 3 layers, each with 10 neurons in it. The top layer is a feature detecting layer. Each neuron in this layer responds to a different general feature in an object. The system starts off not knowing anything, but if enough features are detected simultaneously, then together they are identified as an object, and this will generate an identical output pattern on the bottom layer, the object recognition layer. However, if there are insufficient features to constitute an object, then there is no output. After an object has been recognized, then presentation of just one feature in the object causes the recognition layer to make the same output that it does when presented with the entire object. The network has learned the features of the object, and it can now be recognized from just one of those features – this is pattern completion. Presentation of a single feature not in the original object still produces no output.

The process depends upon Hebbian synapses made by neurons in the middle layer of the network, which we could call the learning layer. I will discuss how it works in a moment, but first let’s see it in action.

The dots in the Results view represent spikes in the feature detector layer (N1 – N10, top half of display) and the object recognition layer (N21 – N30, bottom half of display). (The learning layer is not shown at this stage.)

A series of stimuli are presented to the feature detectors in 4 sequential stages.

First a single feature is presented to N3 (stimulus 1, latency 10 ms). This generates a spike in N3, but a single feature does not constitute an object, and there is no output from the recognition layer.

Next, 3 features are presented simultaneously to N3, N5 and N7 (stimuli 2, 3, and 4, each with latency 50 ms). Three features occurring simultaneously are enough to constitute an object, and neurons N23, N25 and N27 spike, thus representing that object in the recognition layer.

The network has now learned to recognize the object. In this case learning occurs after just one presentation of the feature set because the synaptic and threshold properties of the network were set up that way. It would be more realistic to require several presentations, but that would just complicate things for this demonstration.

Next a single feature is presented to N3 (stimulus 5, latency 90 ms). This is part of the feature set of the object that the network has learned, and the network reconstitutes the entire object in the recognition layer, and neurons N23, N25 and N27 spike, just like they did to the whole object.

Finally, a new feature is presented to N1 (stimulus 6, latency 130 ms). This is not a feature of the learned object, and it is not a new object itself (a single feature is not an object) and there is no output from the recognition layer.

Hebbian mechanism

A circuit fragment is shown below. The whole circuit is just repetitions of this fragment.

Learning circuit fragment
Fragment of a Hebbian pattern completion circuit.

Each neuron in the feature detection layer makes a rather weak excitatory synapse (type a) straight through to its partner in the object recognition layer (vertical connection between N1 and N5 in the diagram). This is not strong enough to make the recognition neuron fire on its own.

Each feature neuron also makes a strong excitatory synapse (type b) to its partner in the learning layer. This is strong enough that whenever the feature neuron spikes, so does the learning neuron (N2 always spikes when N1 spikes). [In terms of computational logic this makes the learning layer redundant, since it simply duplicates spikes in the feature layer. However, its neurons make different types of output to those of the feature layer, and so it is easier to understand the network if it is kept as a separate layer.]

Each learning neuron makes an initially weak Hebbian synapse (type c) to all the neurons in the recognition layer. This is not strong enough, even in combination with the direct activation from a single activated feature neuron, to cause the recognition neuron to spike.

However, if 3 features are presented, each of the 3 recognition neurons partnered with the activated feature neurons gets excitatory input from a total of 4 sources: its own feature neuron, the learning-neuron partner of its own feature neuron, and the learning-neuron partners of the other two activated feature neurons. Each of these inputs is weak, but together the 4 inputs take the recognition neuron above threshold, and it spikes. This enhances the 3 Hebbian inputs deriving from the 3 learning-neuron partners of the 3 activated feature detectors. The enhancement is such that now activation of a single learning neuron by a single feature neuron within the object will activate all three of the recognition neurons to which it makes enhanced connections. The recognition neurons can thus reconstitute the original object from just a single feature.
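This arithmetic can be checked with a toy implementation. The sketch below uses 0-based indices rather than the simulation's neuron numbers, and the weights and threshold are invented values chosen so that three weak inputs cross threshold but one does not:

```python
import numpy as np

n = 10
w_direct = 0.3*np.eye(n)        # feature -> recognition (type a), weak
w_hebb   = 0.25*np.ones((n, n)) # learning -> recognition (type c), naive state
w_trained, theta = 1.0, 1.0     # illustrative values

def present(features):
    f = np.zeros(n)
    f[features] = 1.0           # feature-layer spikes
    l = f.copy()                # learning layer faithfully copies it (type b)
    drive = w_direct @ f + w_hebb @ l
    spikes = drive >= theta
    w_hebb[np.outer(spikes, l) > 0] = w_trained   # Hebb: pre and post both active
    return np.flatnonzero(spikes)

print(present([2]))        # [] : one feature is not an object
print(present([2, 4, 6]))  # [2 4 6] : object recognized, Hebbian synapses trained
print(present([2]))        # [2 4 6] : pattern completion from a single feature
print(present([0]))        # [] : a novel feature still produces nothing
```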

Limitations

This model of course has numerous limitations, one of the most obvious being that it rapidly saturates. If another object is presented that shares features with the first, then that will be learned too and subsequent output when presented with a feature from either object will be a mixed representation of both objects. To solve that one could introduce some sort of “competition” between outputs, perhaps by lateral inhibition, so that the recognition layer would be attracted to one or the other object patterns. However, that is far beyond the scope of this tutorial - there are many whole books entirely devoted to exploring possible neural mechanisms for associative memory and pattern completion. But hopefully this simulation provides food for thought, and at least reinforces the importance of the Hebbian concept in learning theories.

Facilities for Memory Models

The Memory menu provides commands that can be used to investigate learning processes in more complex networks. By default, learning is only retained for the duration of a single experiment, so that when the experiment terminates (by running to completion or by clicking the End button), anything learned in the experiment is lost (by the simulation, hopefully not by the user). The Retain Hebb memory command is a toggle that enables training to persist between experiments. This means that experimental conditions such as stimulus parameters can be changed between experiments, while still retaining the network in its trained state. But note that when this option is selected, the circuit itself cannot be altered and nor can the synaptic properties. Related to this facility is the Reset Hebb memory command, which puts all synapses back into their untrained, starting state. The List Hebb memory command allows you to examine the numerical values of the starting condition, current level of training, and potential maximum trained state of all Hebbian connexions. The Randomize Hebbian command sets all Hebbian connexions to some random value between their starting (naïve, untrained) condition and their fully trained state.

A critical period can be defined using the Set critical period command. A critical period is a time window during which learning takes place. Outside of this window, Hebbian synapses act like ordinary synapses. Once the critical period has been defined, it can be activated or de-activated by selecting the Use critical period command toggle. Finally, the Freeze Hebb memory command is a toggle that you can apply during an experiment. If you select this command during an experiment, no further training takes place until you de-select it. The Hebbian synapses are frozen into their current state of learning. This is a useful method of simulating a critical period “on-the-fly” during an experiment.

 


Wilson-Cowan Firing Rate Models

So far in this tutorial, network models have been made from a relatively small number of neurons, each with its own identity and its own properties, connected to the other neurons through specific synapses. The properties of the network arise from the interactions between these “identified neurons”. However, the nervous system of most animals has at least some regions containing very large numbers of neurons, which can only be distinguished from each other in terms of belonging to a particular category, or sub-population, rather than as individuals. It is unrealistic to try to model each neuron in such a population - instead some models just simulate the activity within sub-populations in terms of their overall spike rate. Such models are called firing rate models, and one of the most famous and influential was devised by Wilson and Cowan (1972). There is a video of Jack Cowan giving a fascinating lecture on the model and its history here. The model has had a lot of influence in simulation studies of higher brain function, and also higher brain malfunction, such as epileptic seizure.

Background Theory

A unit in a Wilson-Cowan model represents a large but spatially-localized population containing an interacting mix of excitatory and inhibitory neurons. The output from such a unit is two time-varying values, E and I, representing the activity levels of these sub-populations at that moment in time. The variables relate to the average spike frequency within the sub-population, but they are normalized to dimensionless numbers such that a value of 0 represents the background activity in the absence of overall excitation or inhibition. A negative value indicates that the activity has dropped below this background level, while a positive value indicates an activity level above background. The maximum possible activity level that the canonical model can produce is +0.5, while the minimum is -0.5, but the actual range of a particular model depends on the model parameters. This is not the most intuitive normalization, but it is what the equations of the original model produce, and since the model is well embedded in the literature, we will stick with it.

The key concepts underlying the model are as follows. The neurons within the unit have thresholds that vary randomly, with a Gaussian distribution, around some average value. The neurons are connected together randomly, with uniform probability, and the connections are dense enough that any neuron can interact either directly or indirectly with any other neuron in the population. All excitatory neurons excite any neuron to which they are connected, and all inhibitory neurons inhibit any neuron to which they are connected.

There are thus four types of interaction: excitatory-to-excitatory (EE), excitatory-to-inhibitory (EI), inhibitory-to-inhibitory (II) and inhibitory-to-excitatory (IE). Each of these has an average connectivity coefficient, or weight, \(w_{EE}\), \(w_{EI}\), \(w_{II}\) and \(w_{IE}\), which represents the average number of synapses mediating that type of interaction per neuron. The strength of input to a sub-population thus depends on the activity level of the source sub-population, E or I, multiplied by the weight of the interaction that mediates the input. The output of a sub-population depends on the balance of excitatory and inhibitory input that it receives.

Wilson Cowan unit
A Unit in the Wilson-Cowan Model. In a large but spatially-localized population of neurons, E represents an excitatory sub-population, I represents an inhibitory sub-population.

These concepts were formalized by Wilson and Cowan into a pair of coupled differential equations. The justification for this was quite complex, but the final equations \eqref{eq:eqWilCowE} and \eqref{eq:eqWilCowI} are quite simple, and their solution yields the activity levels of the state variables, E and I.

\begin{align} \label{eq:eqWilCowE} \tau_e \frac{dE}{dt} &= -E +(k_e -r_eE) *\Re_e(w_{EE}E -w_{IE}I +X_e) \\[1.5ex] \label{eq:eqWilCowI} \tau_i \frac{dI}{dt} &= -I +(k_i -r_iI) *\Re_i(w_{EI}E -w_{II}I +X_i) \end{align}

On the left-hand side, the time constant τ sets the rate at which the activity level decays to the background level if all input is removed. On the right-hand side, r is the absolute refractory period. For simplicity, Wilson and Cowan gave this a value of 1 throughout their analysis, and Neurosim does the same, so it can be ignored from now on. The value k is dependent on the maximum activity level, and we will come back to this shortly. The symbol \(\Re\) denotes the response function \eqref{eq:eqWilCowResponse}, which gives the fraction of the sub-population receiving above-threshold input. The argument to \(\Re\), contained within the brackets to the right of the symbol in \eqref{eq:eqWilCowE} and \eqref{eq:eqWilCowI}, is the sum of the within-unit excitatory and inhibitory inputs as described in the previous paragraph, combined with any external input X. The external input takes account of factors such as an experimental stimulus, tonic input from some “higher brain region” (a catch-all term meaning an unspecified but necessary source), or input from a sub-population in a different Wilson-Cowan unit. The overall external input can be excitatory (X is positive) or inhibitory (X is negative).

The response function used by Wilson and Cowan was a sigmoid function, shifted downwards so that \( \Re(0)=0\), as defined by equation \eqref{eq:eqWilCowResponse} below. They point out that any similar sigmoid function would do, but since this is the form frequently used in the literature, we will stick with it.

\begin{equation} \Re(x) = \frac{1}{1 + e^{-\lambda (x-\theta)}} - \frac{1}{1 + e^{\lambda\theta}} \label{eq:eqWilCowResponse} \end{equation}

The first (leftmost) part of the right-hand side of this equation defines a sigmoid curve in the range 0 : 1, whose maximum slope occurs at x = θ and is proportional to λ. The second part, which could be called the shift parameter, has a fixed value (it is independent of the input argument) determined entirely by the parameters λ and θ. It is numerically equal to the first part when x = 0, so subtracting it from the first part shifts the output downwards so that \( \Re(0)=0\). This means that when excitatory and inhibitory inputs to a sub-population are exactly balanced (the summed argument to \(\Re\) is 0), the output of \(\Re\) is 0 and the sub-population activity relaxes to its background level, which by definition is 0 (if dX/dt = -X, equilibrium is reached when X = 0).

In a physiological context, the parameter θ relates to the average threshold of the population, and the slope λ relates to the variability around this average (the shallower the slope of the sigmoid curve, the greater the variability); in formal terms, this makes the response function the integral of the probability density of the threshold distribution. The shift parameter represents the level of background activity. The higher the background activity, the less “room” there is for excitatory input to increase activity, because neurons have a maximum firing rate. This will reduce the fully-activated value of E or I below the value of 0.5, which is the absolute model maximum. This reduction is reflected by the factor k in the derivative equations \eqref{eq:eqWilCowE} and \eqref{eq:eqWilCowI}. Specifically, k = 1 – the shift parameter.
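To make the equations concrete, here is a minimal numerical sketch of a single unit in Python. Forward-Euler integration is used purely for illustration (this tutorial does not specify Neurosim's integrator), and the parameter values are a commonly quoted set that produces oscillations, not necessarily Neurosim's defaults. Note that k is computed from the shift parameter exactly as described above.

```python
import numpy as np

def R(x, lam, theta):
    """Response function: a logistic sigmoid shifted down so that R(0) = 0."""
    return 1.0/(1.0 + np.exp(-lam*(x - theta))) - 1.0/(1.0 + np.exp(lam*theta))

# Illustrative parameter values (a set often used to produce oscillations;
# not necessarily Neurosim's defaults)
wEE, wIE, wEI, wII = 16.0, 12.0, 15.0, 3.0
lam_e, th_e = 1.3, 4.0
lam_i, th_i = 2.0, 3.7
tau_e = tau_i = 1.0
r_e = r_i = 1.0                                   # refractory term fixed at 1
k_e = 1.0 - 1.0/(1.0 + np.exp(lam_e*th_e))        # k = 1 - shift parameter
k_i = 1.0 - 1.0/(1.0 + np.exp(lam_i*th_i))
Xe, Xi = 1.25, 0.0                                # tonic external input

# Forward-Euler integration of the two coupled equations
dt, T = 0.01, 100.0
n = int(T/dt)
E, I = np.zeros(n), np.zeros(n)
for t in range(n - 1):
    dE = (-E[t] + (k_e - r_e*E[t]) * R(wEE*E[t] - wIE*I[t] + Xe, lam_e, th_e)) / tau_e
    dI = (-I[t] + (k_i - r_i*I[t]) * R(wEI*E[t] - wII*I[t] + Xi, lam_i, th_i)) / tau_i
    E[t+1] = E[t] + dt*dE
    I[t+1] = I[t] + dt*dI
```

Later sketches in this chapter re-use R and these definitions, changing only the weights and inputs under investigation.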

Neurosim Implementation

A single Wilson-Cowan unit is visible in the Setup view, containing a greenish circle with an E in it representing the excitatory component (sub-population), and a reddish circle with an I in it representing the inhibitory component. The four possible interactions within the unit are indicated by the squares for excitation, and circles for inhibition, attached to the sides of the two components. Thus the square a is the mutual re-excitation that the excitatory component delivers to itself, while the square b is the excitation that it delivers to the inhibitory component. The circle c is the inhibition that the inhibitory component delivers to the excitatory component, and the circle d is the mutual inhibition that the inhibitory component delivers to itself.
The letters a-d represent the strengths of the interactions. The actual values are contained in a look-up table.

The reason for placing the weights in a look-up table is that it makes it easier to edit properties in complex models: simply changing a weight in the table can immediately change many interactions, without having to edit each one individually. However, it means that there is a limit of 26 (a-z) different interaction weights in the Neurosim implementation of Wilson-Cowan models.

With the default parameters, the unit produces oscillations in both the E and I components, with E slightly leading I. We will discuss this in more detail below.

Stable States

Bi-stable output

First, just look at the unit in the Setup view.

Now look at the Results view. The upper trace shows the level of activity in the excitatory component (E), and the lower trace shows the level of tonic input to the E component. (We are not monitoring the activity of the inhibitory component in this exercise.)

At the start, E has just background activity (the Initial level in the Properties dialog is 0). However, once the simulation starts, the E component receives immediate inhibitory tonic input of -0.6. This causes the activity of the E component to drop until it reaches a stable value a bit below the initial background level.

Note that the E level drops to a very slightly less negative value. It receives less inhibition, so hopefully that makes sense.

Note that for several clicks there is very little change in the output, but when you reach a stimulus level of 0.9 (remember, this is superimposed on Tonic input of -0.6, so the absolute amplitude is +0.3), the E level starts to increase significantly, and with the next click it jumps up to a substantially higher level, quite close to 0.5, which is actually the maximum level that any Wilson-Cowan model can produce. Further increase in excitation makes the jump occur earlier, but produces only a small additional increase in the E level.

Important: don't forget what the model represents - the value of E is the overall relative activity level in a large population of neurons, not the response of a single neuron.

Both the raw results and the XY plot show that the model essentially has 2 stable states – a low level that is maintained with excitation less than 0.4, and a high level that occurs with greater excitation. There is thus a threshold very similar to that of a spike in a single neuron, and conceptually, the underlying cause is rather similar. In a single neuron the threshold is due to positive feedback from voltage-dependent sodium channels; in the Wilson-Cowan model it is due to positive feedback from mutual re-excitation within the excitatory sub-population. When this reaches a certain level, it drives the population “hard over” towards its maximum level. The detailed characteristics of this depend on the other interactions, but the positive feedback of mutual re-excitation lies at the core.

Take-home message: With these parameter choices, the Wilson-Cowan model displays bi-stability. The excitatory component output takes one of two levels, dependent on the amount of tonic input it receives.

Hysteresis

What happens if we try to reverse the process?

This has the same starting conditions as the previous file, but this time there are two stimuli attached to the E sub-population of the unit.

As before, the flip should occur when the stimulus amplitude reaches a value of 1. (Remember, this is superimposed on Tonic input of -0.6, so the absolute amplitude is +0.4.)

Note that after a delay, a negative stimulus now adds onto the positive value of stimulus 1, so that the total stimulus is reduced. However, the E level remains in the high state, even though the total stimulus is now less than that which was required to flip it into the high state when it was starting from the low state.

The E level remains in its high state until the total stimulus has reduced almost back to its initial tonic level. We can do a similar plot to the previous one:

You should now see a typical hysteresis graph:

Wilson-Cowan hysteresis
Hysteresis in the Wilson-Cowan model. Bi-stable output shows hysteresis. The X axis shows tonic excitation applied to the E population, the Y axis shows the E activity level. The arrows have been added to show the direction of time.


Take-home message: The bi-stable Wilson-Cowan configuration shows hysteresis – a process in which the value of a property lags behind a change in the value of the thing causing it. It is as though the activity level is “sticky”, so that when starting from a low level it takes a big increase in excitation to shift the activity to a higher level, but once there, it takes a big decrease in the excitation to shift it back down again.
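The whole bi-stability and hysteresis experiment can be reproduced outside Neurosim by sweeping the tonic input slowly up and then back down, letting the unit settle at each level. A minimal sketch: the weights below are hypothetical values chosen to make the steady state bistable (they replace the oscillatory set of the first sketch, whose R function and dt are re-used here).

```python
import numpy as np

# Hypothetical bistable weight set (replaces the oscillatory values;
# re-run the first sketch afterwards to restore them)
wEE, wIE, wEI, wII = 12.0, 4.0, 13.0, 11.0
lam_e, th_e = 1.2, 2.8
lam_i, th_i = 1.0, 4.0
k_e = 1.0 - 1.0/(1.0 + np.exp(lam_e*th_e))
k_i = 1.0 - 1.0/(1.0 + np.exp(lam_i*th_i))

E = I = 0.0
up = np.linspace(-2.0, 4.0, 200)
sweep = np.concatenate([up, up[::-1]])     # input rises, then falls again
E_steady = []
for Xe in sweep:
    for _ in range(int(30.0/dt)):          # let the unit settle at this level
        E += dt*(-E + (k_e - E)*R(wEE*E - wIE*I + Xe, lam_e, th_e))
        I += dt*(-I + (k_i - I)*R(wEI*E - wII*I, lam_i, th_i))
    E_steady.append(E)
# Plotting E_steady against sweep traces the loop: the upward and downward
# passes jump between the low and high branches at different input levels.
```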

Tri-stable output

The model is very similar to the previous bi-stable model, but there are small differences in the weighting and response function parameters, and the level of tonic input.

Note that there are now three stable states in the E activity level – a low value of about 0, a middle value of about 0.2, and a high value of about 0.45.

Task: Identify the points in the graph that are common to the sequence of both the increasing and decreasing stimulus strength, the points that are unique to the sequence of increasing stimulus strength, and the points that are unique to the sequence of decreasing stimulus strength. Thinking this through should help you to understand hysteresis in the system.

Tri-stable mechanism

It is not easy to predict the output of a dynamical system containing 4 interacting feedback loops (which is why we use simulation in the first place), but by separating out the components we can get at least some understanding of how it works.

First note in the Connection Weights dialog that all the feedback weights used in the circuit (a-d) have been set to 0, so there are no interactions within the population. Next note in the Properties dialog that the Tonic input to E is set to -0.2 as previously, but that in the Setup view an external stimulus has been added to the E sub-population, and this stimulus is selected in the list in the Experimental control panel. Finally, note that there is a flat red line showing in the Results view, which is the level of the inhibitory component (I). With no interactions and no tonic input to I, this is just 0, which is the background level.

The value of E shows a constant-sized step increase in the Results view, which rises and falls in time with the stimulus onset and offset. However, the rise is not instantaneous, but follows an exponential time course. It looks rather like the passive RC response of a single neuron to injected current, but remember that it represents the overall activity level of a population of many neurons. The exponential response follows from the core design of the model, which is for a large population of neurons with varying thresholds and in varying states of refractoriness at any moment in time.

With this stimulus, the E level shoots up to a maximum (rather like the rising phase of a spike), and it stays there even after the stimulus terminates. This is because the E population now has recurrent excitation (positive feedback) with non-zero weight, so when the level of activity reaches a certain value, it becomes self-reinforcing and rapidly accelerates to its maximum value. Furthermore, it stays there, because it is now self-sustaining and no longer needs a positive stimulus.
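This latching behaviour is easy to reproduce in the same sketch framework: a minimal sketch, assuming R, k_e, lam_e, th_e, wEE and dt from the first sketch, with every other weight set to zero.

```python
# With only recurrent excitation (all weights zero except E-E), a brief
# stimulus flips E into a high state that persists after the stimulus ends.
E, trace = 0.0, []
for step in range(int(40.0/dt)):
    X = 1.5 if 5.0 <= step*dt < 12.0 else 0.0   # brief external stimulus
    E += dt*(-E + (k_e - E)*R(wEE*E + X, lam_e, th_e))
    trace.append(E)
# trace rises during the stimulus, accelerates as the positive feedback takes
# off, and then stays near its maximum: the activity is now self-sustaining.
```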

The E response is exactly the same as before, but now the I response also takes off, because it receives excitatory input from E. However, it does not have any feedback connection to E, so the I activity has no influence on E (or on itself, since there is no recurrent inhibition yet).

At this point, there is a sudden drop in the E and I levels. They both oscillate vigorously, but the oscillations are damped and the activity level stabilizes after a short time. Furthermore, the system now returns to its initial state after the termination of the stimulus.

The introduction of negative feedback inhibition through the connection weight c evidently produces a new, mid-level response that is stable at this level of stimulation. The inhibition diminishes the level of E activity, which in turn diminishes the I activity; a stable state is reached considerably above the resting level, but well below the maximum activity that results from positive feedback alone.

With each increase in inhibition within the I sub-population, the oscillations in E and I stabilize more quickly. It was not obvious in advance that this would occur (at least to this author), but it seems that the mutual inhibition within the I sub-population damps the oscillations within it. There is also a rather counter-intuitive increase in the stable I activity level as the mutual inhibition increases. It appears that the initial reduction in I due to increased mutual inhibition within the sub-population releases E from inhibition, which then leads to greater excitation of I, resulting in an overall increase in the I level, despite the increased inhibition!

We are now back to the original configuration that produced the tri-stable output, which is a good point at which to move on.

Oscillations

We have a Wilson-Cowan unit whose E component receives Tonic input of 1.  This creates a stable increase in E activity throughout its duration, accompanied by a very small increase in I due to the EI interaction, but nothing very interesting happens.

The unit now oscillates. It takes a short while for the positive-feedback excitation in E to build up, but when it “takes off” it increases activation of the I sub-population, and this feeds back onto E to shut it down. The loss of excitation of I reduces its inhibitory effect, allowing E to escape again. And so it goes on.

Conceptually, the oscillation mechanism is rather similar to that of some single-neuron endogenous bursters. In the case of a burster the oscillation may result from the interaction between a voltage-dependent calcium current mediating positive feedback and a calcium-dependent potassium current mediating delayed negative feedback. In the Wilson-Cowan oscillator, the E population mediates positive feedback by exciting itself, and the I population mediates negative feedback because it inhibits E, but with a delay because the I population is only activated by the E population.

The EE, EI and IE connections are all necessary for the oscillations, but the II recurrent inhibition is not essential.

This sets the II weight to zero. The system still oscillates, but at a lower frequency. However, the frequency could be increased by increasing the Tonic input level.
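To put a number on "lower frequency", the oscillation frequency can be estimated from the E trace of the first sketch; re-running that sketch with wII = 0.0, or with a larger Xe, and re-measuring quantifies the changes described here. A minimal sketch:

```python
# Estimate oscillation frequency from upward zero-crossings of the
# mean-subtracted E trace produced by the first sketch.
import numpy as np
e = E - E[n//2:].mean()                    # use the settled half for the mean
up = np.where((e[:-1] < 0) & (e[1:] >= 0))[0]
freq = (len(up) - 1) / ((up[-1] - up[0]) * dt)   # cycles per unit model time
```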

Take-home message: With appropriate choices of parameter values, the Wilson-Cowan model can produce rhythmic oscillations in activity level.

Phase Plane Analysis

As described for the reduced Morris-Lecar model earlier, there is a well-developed mathematical framework known as phase-plane analysis for investigating the properties of dynamical systems involving a pair of coupled differential equations. This can tell us whether the system is stable, multi-stable, oscillatory, chaotic, explosively unstable etc. Since the Wilson-Cowan model is just such a system of paired equations it is amenable to this analysis, which is one reason why the model is so popular (the other being that it yields valuable insights into real-world problems in neuroscience).

Mathematical analysis in the phase plane is beyond the remit of this tutorial (and the capability of its author), but in its simplest form it just involves plotting the two state variables of the equations (E and I) against each other to produce what is called a phase portrait. This gives an alternative view of the shape of the system response, which can emphasise features that are not immediately obvious in a normal time-series plot.
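With the E and I arrays from the first sketch in hand, the equivalent of such a plot is a few lines of matplotlib; colouring the points by time mimics the Colour time option mentioned shortly (blue start, yellow end, using the viridis colour map).

```python
# Phase portrait of the first sketch: I on the X axis, E on the Y axis,
# colour-coded by time so the approach to the limit cycle is visible.
import numpy as np
import matplotlib.pyplot as plt

t = np.arange(n) * dt
plt.scatter(I, E, c=t, s=2, cmap='viridis')
plt.xlabel('I activity'); plt.ylabel('E activity')
plt.colorbar(label='time')
plt.show()
```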

The Phase Plane graph appears in a new window. You may need to move it so that it does not obscure any controls or results.

A rather small vertical line appears at the bottom-left of the Phase Plane graph. This reflects the fact that E increases during the stimulus, causing a shift along the Y axis, whereas I remains almost constant at 0, hence no shift on the X axis.

The phase plane plot is now very different. We can see this more clearly by expanding the plot and (probably) slowing the simulation.

You can now observe the phase-plane plot evolve as the simulation progresses. It starts at the bottom-left because the Initial level of E and I is set to 0. As the Tonic input takes effect it climbs up gradually as the E value increases, and then swings into a series of identical clockwise loops, which superimpose exactly on each other, during the oscillations.

While the simulation runs the phase-plane plot is in black-and-white, but once it terminates, it becomes colour-coded by time (if the Colour time box is checked), with blue being the start, and yellow the end time.

The evolution of the phase plane can be seen more clearly in a 3-D plot.

Phase-plane loops like this are called limit cycles, and they are indicative of oscillatory behaviour. Limit cycles are examples of attractors, because they indicate a state that a dynamical system will end up in, even from a wide variety of starting conditions.

The plot now starts at the top-left of the graph, but is attracted into the same limit cycle.

The plot still displays a limit cycle, but spirals round a few times before it reaches it. This is apparent in the normal Results view as a series of oscillations with decreasing amplitude, which stabilize at a fixed but reduced level.

The Results view shows that the oscillations diminish in amplitude until they disappear completely. In the Phase Plane the plot spirals into a single point. This new attractor is called a stable focus.

Multi-Unit Models

A Wilson-Cowan unit represents a large but local population of neurons, such as a small chunk of cerebral cortex, or neurons within a segmental ganglion in an invertebrate. However, it is obvious that such local populations may be able to interact with similar nearby populations. This can be modelled using multiple Wilson-Cowan units, with connections between the E and I sub-populations of the different units, in exactly the same way as the connections are made within a unit.

There is a pair of Wilson-Cowan units, each of which oscillates due to the mechanisms described in the previous section. However, the Tonic input to E is slightly different between the two (1.74 to E1, 1.70 to E2), so they oscillate at different frequencies. At this stage there are no connections between the units, so they operate completely independently.

The top axis shows E and I of unit 1, the middle axis shows E and I of unit 2, and the bottom axis shows E of 1 and 2 superimposed. The difference in frequency is obvious in the bottom trace, where the E activity starts in phase, but then drifts out of phase as time passes.

Synchronizing Oscillations

To synchronize the oscillators, we obviously have to couple them together somehow. Just as within an individual unit, there are four patterns of connectivity that can occur between two units: EE, EI, IE and II. We will try each of these in turn.
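Before trying them in Neurosim, here is a sketch of the same experiment in code, re-using the definitions from the first sketch. Each between-unit pattern gets its own hypothetical coupling weight (cEE, cEI, cIE, cII); with all four at zero the units drift out of phase as in the previous section. The drive values are illustrative: the Neurosim file uses 1.74 and 1.70 with its own parameter set.

```python
# Two units, each built as in the first sketch (R, the within-unit weights,
# k_e/k_i, lam/theta and dt are assumed from there). Set one coupling weight
# non-zero to explore that pattern, e.g. cEE = 3.0 or cIE = 1.0.
import numpy as np
cEE, cEI, cIE, cII = 0.0, 0.0, 0.0, 0.0
Xe1, Xe2 = 1.25, 1.20   # slightly different drives, so the frequencies differ
steps = int(100.0/dt)
E1t, E2t = np.zeros(steps), np.zeros(steps)
E1 = I1 = E2 = I2 = 0.0
for t in range(steps):
    aE1 = wEE*E1 - wIE*I1 + Xe1 + cEE*E2 - cIE*I2
    aI1 = wEI*E1 - wII*I1 + cEI*E2 - cII*I2
    aE2 = wEE*E2 - wIE*I2 + Xe2 + cEE*E1 - cIE*I1
    aI2 = wEI*E2 - wII*I2 + cEI*E1 - cII*I1
    E1 += dt*(-E1 + (k_e - E1)*R(aE1, lam_e, th_e))
    I1 += dt*(-I1 + (k_i - I1)*R(aI1, lam_i, th_i))
    E2 += dt*(-E2 + (k_e - E2)*R(aE2, lam_e, th_e))
    I2 += dt*(-I2 + (k_i - I2)*R(aI2, lam_i, th_i))
    E1t[t], E2t[t] = E1, E2
# Plotting E1t and E2t superimposed reproduces the bottom axis of the
# Results view; plotting E1t against E2t reproduces the phase portrait.
```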

The Setup view should now look like this:

Wilson-Cowan paired EE oscillators
Paired Wilson-Cowan Oscillators. There are EE connections.

There is no change in the Results view, because the 0 weight of Type e means that the connections are ineffective.

The Results view shows that the oscillators are becoming phase-locked towards the end of the simulation, although they have different amplitudes, and there is a phase lag between the two E values. This is also apparent in the Phase Plane view, where there is an indication of the development of a limit cycle as an elliptical loop late in the plot (the yellow end of the colour coding).

There is now tight phase-locking between the oscillators, and the phase lag is either very short or non-existent. The phase portrait in the Phase Plane view is approaching a diagonal line, although the differences in amplitude prevent it becoming completely linear.

The oscillations in the E value in units 1 and 2 are now almost identical. The phase portrait is now an almost linear diagonal plot.

It is evident that mutual excitation between adjacent excitatory sub-populations in oscillating Wilson-Cowan units can synchronize the activity of the units, as was hypothesized at the start of this section. But are there other ways of achieving synchronization?

Wilson-Cowan paired EI oscillators
Paired Wilson-Cowan oscillators. There are EI connections.

With E-I coupling between units the Results view shows tighter coupling than was achieved with E-E coupling, and this is confirmed in the phase portrait, which is much closer to a diagonal line. So what about coupling originating from the I sub-populations?

Wilson-Cowan paired IE oscillators
Paired Wilson-Cowan oscillators. There are IE connections.

The coupling is even tighter than with the previous configuration. Remember, with EE coupling we needed a connection weight of 3 to achieve tight coupling, but with the IE connection, a weight of 1 achieves similar coupling.

Take-home message: In neural circuits it is often easier to achieve coupling by interactions involving inhibition, rather than through purely excitatory effects. This is probably a general principle of many central pattern generators and rhythmic cortical activity.

Anti-Phase Synchronization

We have seen that three of the four potential coupling patterns (EE, EI and IE) all produce synchronization between units with this parameter set. However, many pattern generators, particularly those involved in locomotion, require anti-phasic coupling between oscillators. Can the remaining untested coupling pattern, II, achieve this?

Wilson-Cowan paired II oscillators
Paired Wilson-Cowan oscillators. There are II connections.

The Results view shows that the oscillations start off in phase, but they rapidly undergo a transition to anti-phasic coupling – when the E1 level is high, the E2 level is low, and vice versa.

The phase portrait is a bit complicated due to the transition, but the red-yellow end of the time spectrum is approaching a repeating skewed figure-of-eight shape, which is a form of limit cycle.

To make this clearer, we could run the simulation for longer. However, another option is to introduce a settling time delay.

The simulation now runs for 500 ms in the “background” without displaying anything. This gives the differential equations time to reach the stable limit cycle condition. The figure-of-eight shape of the phase portrait is now very clear. The “waist” of the 8-shape is the cross-over point visible in the Results view, where the two E values are the same. It occurs twice per cycle. The top-left part of the plot is where E1 (Y axis) is high and E2 (X axis) is low, while the bottom right is where E1 is low and E2 is high.

Take-home message: Anti-phasic synchronization can be achieved by coupling the inhibitory sub-populations of a pair of Wilson-Cowan units. Once again, this emphasizes the importance of inhibition in neural activity.

Fly larval crawling

The fruit fly (Drosophila melanogaster – the famous animal used as a model system in a multitude of genetic studies) has a soft-bodied larval form known as a grub, and grubs can crawl around using peristaltic waves of contraction generated in the abdomen. There are 8 abdominal segments, and for normal forward locomotion they use waves that start at the back (A8) and propagate forward, and for occasional backwards locomotion they use waves starting at the front (A1) and propagating backwards. The rhythm can be produced by the isolated central nervous system and does not require sensory feedback for its generation, although such feedback can stabilize the rhythm and increase its frequency.

Gjorgjieva et al. (2013) used single Wilson-Cowan units to model the 8 segments of the abdomen, and coupled each unit to its nearest neighbour to achieve intersegmental coordination.  They set up reciprocal excitatory coupling between E units in adjacent segments, and inhibitory coupling from each I unit to the E units in adjacent segments.

This is an implementation of the Gjorgjieva model (the starting model – they enhanced it with sensory input and bilateral oscillators later in the paper). The intersegmental excitatory connections are the yellow squares showing type e, and the inhibitory connections are the yellow circles showing type f. Note that two stimuli are set up (the white square boxes). Stimulus 1 is applied to A8, and represents brain activation of forward crawling, and stimulus 2 is applied to A1 and represents brain activation of backward crawling.
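As a rough sketch outside Neurosim, the same connectivity pattern might look like the code below: reciprocal E-E excitation between neighbouring segments, plus inhibition from each I onto the neighbouring E populations. The coupling weights eW and iW are hypothetical illustrative values, not the published parameters, and the within-unit definitions come from the first sketch; treat this as a skeleton for experimentation rather than a tuned model.

```python
import numpy as np
nseg = 8
eW, iW = 0.5, 2.0                 # hypothetical intersegmental weights
Ev, Iv = np.zeros(nseg), np.zeros(nseg)

def nsum(v):
    """Sum of each segment's nearest-neighbour activities."""
    s = np.zeros_like(v)
    s[1:] += v[:-1]
    s[:-1] += v[1:]
    return s

for step in range(int(80.0/dt)):
    X = np.zeros(nseg)
    if step*dt < 20.0:
        X[-1] = 1.25              # tonic "brain" drive to A8 starts a forward wave
    aE = wEE*Ev - wIE*Iv + eW*nsum(Ev) - iW*nsum(Iv) + X
    aI = wEI*Ev - wII*Iv
    Ev += dt*(-Ev + (k_e - Ev)*R(aE, lam_e, th_e))
    Iv += dt*(-Iv + (k_i - Iv)*R(aI, lam_i, th_i))
```

Driving X[0] instead of X[-1] initiates the wave at A1, the backward-crawling case.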

Moderately prolonged activation of A8 generates two peristaltic waves that move forward along the abdomen, such as would cause forward crawling in the larva. This is followed by a briefer activation of A1 generating a single backward peristaltic wave. The activity levels in the Setup units are colour-coded, and the peristalsis is visible in the Setup view as a wave of colour propagating along the chain of units. In the Results view the waves are seen as standard changes in E (green) and I (red) activity levels plotted against time.

Take-home message: Appropriate symmetrical inter-segmental coupling can produce peristaltic waves that propagate in either direction, depending on the site of initiation.

The contributions of the intersegmental connections can be seen by editing their weights.

Note that A8 produces multiple oscillations. This is because the internal connections of each unit have the original Wilson-Cowan parameters that produce oscillations, as described above. The “brain” stimulus provides an episode of tonic excitatory input to A8 enabling 3 cycles of oscillations, while the briefer input to A1 allows only 1 cycle. None of the other units receive any tonic input, so they are silent.

A wave of excitation propagates along the network, with the same sort of inter-segmental delay as the intact system. The phase progression of the starting model is thus due to the time taken for the excitation to build up to the “take off” threshold in each segment. However, without the intersegmental inhibition, once the E level has taken off, it sticks at the high level.

There are now propagating waves of peristalsis, but they are not of equal duration in each segment: the initiating segment has the longest duration, and the duration diminishes as the wave progresses.

The intersegmental connections are providing some of the same circuit “concepts” as the within-unit connections: mutual excitation and feedback inhibition. Can they actually replace them?

The system still produces peristaltic waves with broadly similar properties to those of the starting model. This is interesting, because although the connectivity “concepts” of the two systems have a lot in common, they clearly have rather different connectivity at the anatomical level of actual neurons and synapses. This is another example of the well-established fact that a model may produce an output that looks like that of a real system, but that does not prove that the real system actually works like the model.



On to Kinetics of Single Ion Channels ...