A simple feedforward network
In this task we consider a so-called synfire chain, an illustration of which can be seen in the next image:
A synfire chain consists of `numb_pops` “excitatory” populations (red), each comprising \(n_\text{exc}\) neurons. Consecutive populations are connected via excitatory synapses, so that spike events in one population excite the neurons in the following population.
If the first population is activated, the activity therefore travels along the chain.
To stop a population from firing once it has excited the next population, each excitatory population is also connected, via inhibitory synapses, to an “inhibitory” population (blue) of \(n_\text{inh}\) neurons.
This inhibitory population is excited by the spike events of the previous excitatory population.
The number of neurons in the “excitatory” and “inhibitory” populations, as well as the strength of the connections between them, needs to be chosen carefully for the activity to travel neatly.
The chain can also be closed by connecting the last population back to the first, making the activity travel through the chain repeatedly.
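The feedforward wiring described above can be sketched in plain Python. The helper below is purely illustrative (it is not part of the experiment scaffold): it lists the excitatory source → target population indices, including the optional wrap-around connection of a closed chain.

```python
def exc_exc_pairs(numb_pops, closed=False):
    """Return (source, target) population indices for the exc -> exc
    projections of a synfire chain with numb_pops links."""
    pairs = []
    for i in range(numb_pops):
        # the last population only projects onward if the chain is closed
        if i == numb_pops - 1 and not closed:
            continue
        pairs.append((i, (i + 1) % numb_pops))
    return pairs


print(exc_exc_pairs(4))               # open chain: [(0, 1), (1, 2), (2, 3)]
print(exc_exc_pairs(4, closed=True))  # adds the wrap-around pair (3, 0)
```

The same index pattern is used for the exc → inh projections in the scaffold below, shifted by one link so that each inhibitory population is driven by the preceding excitatory one.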
Note
This print-friendly version of the instructions only lists the most central functions and code blocks. The interactive variant provides additional helpers.
Experiment setup
You are given a scaffold to run the experiment, in which populations are created, connected, and finally stimulated. This scaffold can be used to investigate the behaviour of a synfire chain and to answer the questions below.
ProjCollType = Dict[str, Union[List[pynn.Projection], pynn.Projection]]
PopCollType = Dict[str, Union[List[pynn.Population], pynn.Population]]


def setup_network(numb_pops: int, pop_sizes: Dict[str, int],
                  closed: bool = False) -> Tuple[ProjCollType, PopCollType]:
    """
    This function generates a synfire chain network.

    Attention: You need to run 'pynn.end()' before a re-setup of a network
    is possible!

    :param numb_pops: chain length
    :param pop_sizes: number of neurons in the excitatory ('exc') and
        inhibitory ('inh') populations
    :param closed: indicates if the chain is closed, defaults to False
    :return: projections (as dict) and populations (as dict)
    """
    # The refractory period needs to be small to allow regular spiking
    neuron_params = {"refractory_period_refractory_time": 50}

    # Setup pyNN with a calibration
    pynn.setup(initial_config=calib)

    #############################
    # create neuron populations #
    #############################
    pop_collector = {'exc': [], 'inh': []}
    for syn_type in ['exc', 'inh']:
        for _ in range(numb_pops):
            pop = pynn.Population(pop_sizes[syn_type],
                                  HXNeuron(**neuron_params))
            pop.record(['spikes'])
            pop_collector[syn_type].append(pop)

    # record membrane potential from the first neuron of the first
    # excitatory population of the chain
    pop1exc = pop_collector['exc'][0]
    pop1exc[[0]].record('v', device='pad_0_buffered')
    pop1exc[[0]].record('v', device='madc')

    # kick starter input pulse at t = 0
    stim_pop = pynn.Population(pop_sizes['exc'],
                               SpikeSourceArray(spike_times=[0]))

    #################################################
    # connect neuron populations to a synfire chain #
    #################################################
    proj_collector = {'exc_exc': [], 'exc_inh': [], 'inh_exc': []}

    # connect stim -> exc
    proj_collector['stim_exc'] = pynn.Projection(
        stim_pop, pop_collector['exc'][0], pynn.AllToAllConnector(),
        synapse_type=StaticSynapse(weight=0), receptor_type='excitatory')
    # connect stim -> inh
    proj_collector['stim_inh'] = pynn.Projection(
        stim_pop, pop_collector['inh'][0], pynn.AllToAllConnector(),
        synapse_type=StaticSynapse(weight=0), receptor_type='excitatory')

    for pop_index in range(numb_pops):
        # connect inh -> exc
        proj_collector['inh_exc'].append(pynn.Projection(
            pop_collector['inh'][pop_index],
            pop_collector['exc'][pop_index],
            pynn.AllToAllConnector(), synapse_type=StaticSynapse(weight=0),
            receptor_type='inhibitory'))

        # if the synfire chain is not closed, the last exc -> exc and
        # exc -> inh projections, which connect back to the first
        # population, need to be skipped
        if (pop_index == numb_pops - 1) and not closed:
            continue

        # connect exc -> exc
        proj_collector['exc_exc'].append(pynn.Projection(
            pop_collector['exc'][pop_index],
            pop_collector['exc'][(pop_index + 1) % numb_pops],
            pynn.AllToAllConnector(), synapse_type=StaticSynapse(weight=0),
            receptor_type='excitatory'))
        # connect exc -> inh
        proj_collector['exc_inh'].append(pynn.Projection(
            pop_collector['exc'][pop_index],
            pop_collector['inh'][(pop_index + 1) % numb_pops],
            pynn.AllToAllConnector(), synapse_type=StaticSynapse(weight=0),
            receptor_type='excitatory'))

    return proj_collector, pop_collector
# Initially set up a network
projs, pops = setup_network(
    numb_pops=8,                      # chain length
    pop_sizes={'exc': 7, 'inh': 7})   # size of each chain link
With the network set up, we can configure and run it.
def set_network_weights(weights: Dict[str, int],
                        projections: ProjCollType):
    """
    Sets weights in the network.

    :param weights: unsigned weights to be set
    :param projections: projections where the weights should be applied
    :raise ValueError: if a field name in weights can't be found in
        projections
    """
    for name, weight in weights.items():
        if name not in projections:
            raise ValueError(f"Invalid field name in weights: '{name}'. "
                             f"Possible fields: {list(projections.keys())}")
        if isinstance(projections[name], list):
            for proj in projections[name]:
                proj.setWeights(weight)
        else:
            projections[name].setWeights(weight)
def run(populations: PopCollType, duration: pq.Quantity) \
        -> Tuple[Dict[str, np.ndarray], np.ndarray]:
    """
    Perform the configured experiment.

    :param populations: population collector to extract some network
        information
    :param duration: emulation time in ms
    :return: spikes of all neurons and membrane trace of the first
        excitatory neuron
    """
    # emulate the network
    pynn.reset()
    pynn.run(float(duration.rescale(pq.ms)))

    # read back all recorded spikes
    spike_collector = {'exc': np.zeros(len(populations['exc']), dtype=object),
                       'inh': np.zeros(len(populations['inh']), dtype=object)}
    for syn_type in ['exc', 'inh']:
        for pop_index, pop in enumerate(populations[syn_type]):
            spike_collector[syn_type][pop_index] = \
                pop.get_data("spikes").segments[-1].spiketrains

    # read back the membrane potential
    mem_v = populations['exc'][0][[0]].get_data(
        "v").segments[-1].irregularlysampledsignals[0]

    return spike_collector, mem_v
Exercises
Adjusting weights
Tune the weights below to obtain a synfire chain behavior as seen in the figure above.
Which connection is the most sensitive one?
What happens if you disable inhibition?
Make comments in your lab book.
After executing the cells above, you can execute this cell as often as you like.
synapse_weights = dict(
    stim_exc=...,  # int in range 0 - 63
    stim_inh=...,  # int in range 0 - 63
    exc_exc=...,   # int in range 0 - 63
    exc_inh=...,   # int in range 0 - 63
    inh_exc=...    # int in range -63 - 0
)
set_network_weights(weights=synapse_weights, projections=projs)
results = run(pops, 0.2 * pq.ms)
plot_data(pops, *results)
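When judging whether the activity really traverses the whole chain, it can help to count spikes per population in addition to inspecting the plot. Below is a minimal sketch, assuming the structure returned by `run` above: per population, a list of per-neuron spike-time sequences.

```python
def spikes_per_population(spiketrain_lists):
    """Sum the number of spikes over all neurons of each population."""
    return [sum(len(train) for train in trains)
            for trains in spiketrain_lists]


# With a working synfire chain every population should contribute spikes,
# e.g. counts = spikes_per_population(results[0]['exc']).
# Illustrated here with plain lists of spike times:
example = [[[0.01, 0.02], [0.015]], [[0.03], []]]
print(spikes_per_population(example))  # -> [3, 1]
```

A population with zero spikes indicates that the activity died out at that link, which points at the weight that needs retuning.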
Adjusting the number of neurons per population
Reduce the number of neurons in each population and use the free neurons to increase the chain size. Which hardware feature limits the minimal number of neurons in each population? What is the maximal chain length that you can produce? Make comments in your lab book.
projs, pops = setup_network(
    numb_pops=...,                        # chain length
    pop_sizes={'exc': ..., 'inh': ...})   # size of each chain link
# we reuse the weights from the exercise above
set_network_weights(weights=synapse_weights, projections=projs)
results = run(pops, 0.2 * pq.ms)
plot_data(pops, *results)
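As a back-of-the-envelope estimate for the maximal chain length, assume the chip provides a fixed pool of neuron circuits (512 on a single BrainScaleS-2 chip; this number is an assumption about your setup) and note that the stimulus population is a spike source, so it occupies no neuron circuits. Each chain link then uses \(n_\text{exc} + n_\text{inh}\) neurons:

```python
def max_chain_length(n_exc, n_inh, n_neurons=512):
    """Longest chain that fits if every link uses n_exc + n_inh neurons.
    n_neurons is the assumed number of available neuron circuits."""
    return n_neurons // (n_exc + n_inh)


print(max_chain_length(7, 7))  # the initial configuration -> 36
print(max_chain_length(2, 2))  # smaller links allow a longer chain -> 128
```

This is only an upper bound from the neuron count; in practice, reliability of the propagation limits how small the populations can get before you reach it.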
Closing the loop
Close the loop from the last to the first population. Do the neurons still fire after the software run has completed? Make comments in your lab book.
Hint: For this part it might be easier to switch to a smaller chain with larger populations.
Hint: Take a look at the code above. The closed loop is already implemented and just needs to be activated.
projs, pops = setup_network(...)
set_network_weights(weights=..., projections=projs)
results = run(pops, 0.2 * pq.ms)
plot_data(pops, *results)
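Whether the closed loop keeps the activity alive within the recorded window can be checked by looking at the latest spike time: in a self-sustaining chain it should lie close to the end of the emulation window. A small sketch, assuming a flat list of spike times in ms (the tolerance `margin_ms` is a freely chosen parameter):

```python
def activity_persists(all_spike_times, duration_ms, margin_ms=0.02):
    """True if the latest recorded spike falls within margin_ms of the
    end of the emulation window of length duration_ms."""
    if not all_spike_times:
        return False
    return max(all_spike_times) >= duration_ms - margin_ms


print(activity_persists([0.01, 0.08, 0.19], 0.2))  # spikes until the end
print(activity_persists([0.01, 0.05], 0.2))        # activity died out early
```

Note that this only inspects the recorded window; whether the on-chip activity truly continues after the software has returned is exactly what the exercise asks you to reason about.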