# Simulating the BrainScaleS hardware

The Executable System Specification (ESS) is a software model of the BrainScaleS hardware system. The ESS is implemented in C++/SystemC and contains functional models of all relevant units of the wafer-scale hardware. It is fully executable and resembles how neural experiments are run on the real wafer-scale system.

The ESS allows offline experimentation with the BrainScaleS system: it can be used to test small models locally before submitting them to the hardware system, and to introspect the system behaviour. The ESS supports PyNN versions 0.7 and 0.8.

Note, however, that the ESS runs much more slowly than the real hardware.

## Installation

### Using Docker

The simplest way to use the ESS is the uhei/ess-system Docker image, which has a recent version of the ESS already installed and is updated on a regular basis.

#### Prerequisites

You need to have Docker installed. On Ubuntu Trusty 14.04 LTS the Docker package is called docker.io. Note that there is also a package named docker in the Ubuntu repositories; it is something completely different.

After installing, you may want to add yourself to the docker group; otherwise you have to prepend sudo to most Docker commands.

You might also want to have a look at /etc/default/docker.io after installation, especially if you’re sitting behind a proxy. These proxy settings are not for the containers, but for communication with the Docker Repository (i.e. docker pull).

sudo apt-get install docker.io
# sudo adduser $USER docker
# sudo editor /etc/default/docker.io
# re-login


#### Download/upgrade of the uhei/ess-system image

The following step takes some time upon first execution, depending on your internet connection. Later updates should generally be faster, as only changes are pulled.

sudo docker pull uhei/ess-system:14.04


#### Starting the ESS container

The execution of the downloaded image creates a new Docker container. Note that Docker containers are not persistent, but one can link a host directory into the container for persistent user data. The following docker run command does just that: the host directory specified by the VOLUME environment variable will be available as /bss/$USER within the container. If you are interested in the option flags of the docker run command, run docker help run.

mkdir ess-data                # where to put user/persistent data
VOLUME="${PWD}/ess-data"
sudo docker run --name ess-container --hostname ess-container \
    -v "$VOLUME:/bss/$USER" -ti "uhei/ess-system:14.04" /bin/bash


#### Testing your ESS container installation

You should now be in the container, as root in the directory /bss (as in BrainScaleS). An ls should show your folder for persistent data (under your user name, or whatever you put instead of $USER in the docker run command above), as well as the directories mappingtool_test, neurotools and tutorial.

To test your installation, you can run some unit tests. These are almost the same as for the source installation below; only the mapping tool test is installed at a different location:

# root@ess-container:/bss#
python mappingtool_test/regression/run_ess_tests.py
python $SYMAP2IC_PATH/components/systemsim/test/regression/run_ess_tests.py
python $SYMAP2IC_PATH/components/systemsim/test/system/run_ess_tests.py


### Installing from source

Note

These instructions have been tested on a native Ubuntu Saucy 13.10 on a 64-bit machine.

#### Prerequisites

To be able to configure and compile the symap2ic project, you need to install the following libraries:

apt-get -y install git python-pip python-dev build-essential libgtest-dev \
libboost-all-dev libpng12-dev libssl-dev libmongo-client-dev mongodb \
liblog4cxx10-dev autotools-dev automake


The ESS expects the 64-bit libraries to be located in either /lib64 or /usr/lib64. In Ubuntu 13.10, however, the 64-bit libraries live in /usr/lib/x86_64-linux-gnu, so you need to create the following symbolic links:

ln -s /usr/lib/x86_64-linux-gnu /usr/lib64
ln -s /usr/lib/libmongoclient.a /usr/lib/x86_64-linux-gnu/libmongoclient.a


To be able to run the tests and to use the ESS, you also need to install:

apt-get -y install libgsl0-dev libncurses5-dev libreadline-dev gfortran \
    libfreetype6-dev libblas-dev liblapack-dev r-base python-rpy
pip install numpy scipy matplotlib PIL NeuroTools mpi4py xmlrunner


You should then install PyNN:

pip install PyNN  # PyNN 0.8


or

pip install PyNN==0.7.5  # PyNN 0.7


#### Installation of the ESS

You should first obtain an account from the Heidelberg group. Then, on your computer, generate an RSA key:

ssh-keygen -t rsa


Suppose that you have saved the key in the file ~/.ssh/id_rsa. On the Heidelberg website, go to ‘My account’ (upper right), click on ‘Public Key’, then click on ‘New value’ and paste the contents of your computer’s id_rsa.pub. Wait until the activation is done.

cd
git clone git@brainscales-r.kip.uni-heidelberg.de:symap2ic.git
cd symap2ic
source bootstrap.sh.UHEI .


For PyNN 0.8:

./waf set_config systemsim-pynn8
./waf update


For PyNN 0.7:

./waf set_config systemsim


If the execution of the four lines above failed, you probably have read-access problems with the repositories; please e-mail neuromorphic@humanbrainproject.eu. Now go on by configuring and installing the system:

./waf configure --stage=brainscales --use-systemsim --without-hardware \
    --prefix=$HOME/symap2ic
./waf install


You now set the environment variables:

echo 'export SYMAP2IC_PATH=$HOME/symap2ic' >> ~/.bashrc
echo 'export PYTHONPATH=$PYTHONPATH:$SYMAP2IC_PATH/lib' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$SYMAP2IC_PATH/lib' >> ~/.bashrc
bash


For PyNN 0.7, you need to copy the PyNN hardware directory into the PyNN package:

cd $SYMAP2IC_PATH
cp -r components/pynnhw/misc/pyNN_hardware_patch/hardware \
    /usr/local/lib/python2.7/dist-packages/pyNN/


You can now test that the hardware backend is accessible:

python -c 'import pyNN.hardware.brainscales as sim'


To test your installation with PyNN 0.7, you can run some unit tests:

python $SYMAP2IC_PATH/components/mappingtool/test/regression/run_ess_tests.py
python $SYMAP2IC_PATH/components/systemsim/test/regression/run_ess_tests.py
python $SYMAP2IC_PATH/components/systemsim/test/system/run_ess_tests.py


To test your installation with PyNN 0.8, you can run the PyNN unit and system tests:

cd ~/PyNN-8/test
cd unittests/backends
nosetests test_mock.py
nosetests test_hardware_brainscales.py


## Using the ESS

Scripts to run on the ESS should in general be identical to those that run on the BrainScaleS hardware. The only required difference is to choose PyMarocco.ESS as the PyMarocco backend.

### Example

A full example, in which an Adaptive-Exponential Integrate & Fire neuron is stimulated by external spikes, is shown in nmpm1_adex_neuron_ess.py:

#!/usr/bin/env python

"""
Example Script for simulation of an AdEx neuron on the ESS

Note: Neuron and synapse parameters are chosen to be within the parameter ranges of
the default calibration.
"""

import pyhmf as pynn
#import pyNN.nest as pynn
from pymarocco import PyMarocco, Defects
import pylogging
import Coordinate as C
import pysthal

# configure logging
pylogging.reset()
pylogging.default_config(level=pylogging.LogLevel.INFO,
                         fname="logfile.txt",
                         dual=False)

# Mapping config
marocco = PyMarocco()
marocco.backend = PyMarocco.ESS # choose Executable System Specification instead of real hardware
marocco.calib_backend = PyMarocco.CalibBackend.Default
marocco.defects.backend = Defects.Backend.None
marocco.neuron_placement.skip_hicanns_without_neuron_blacklisting(False)
marocco.hicann_configurator = pysthal.HICANNConfigurator()
marocco.experiment_time_offset = 5.e-7 # can be low for ESS, as no repeater locking required
marocco.neuron_placement.default_neuron_size(4) # default number of hardware neuron circuits per pyNN neuron
marocco.param_trafo.use_big_capacitors = False

# set-up the simulator
pynn.setup(marocco=marocco)

neuron_count = 1 # size of the Population we will create

# Set the neuron model class
neuron_model = pynn.EIF_cond_exp_isfa_ista # an Adaptive Exponential I&F Neuron

neuron_parameters = {
    'a'          : 4.0,    # adaptation variable a in nS
    'b'          : 0.0805, # adaptation variable b in pA
    'cm'         : 0.281,  # membrane capacitance in nF
    'delta_T'    : 1.0,    # delta_T from the AdEx model in mV, determines the sharpness of spike initiation
    'e_rev_E'    : 0.0,    # excitatory reversal potential in mV
    'e_rev_I'    : -80.0,  # inhibitory reversal potential in mV
    'i_offset'   : 0.0,    # offset current in nA
    'tau_m'      : 9.3667, # membrane time constant in ms
    'tau_refrac' : 0.2,    # absolute refractory period in ms
    'tau_syn_E'  : 20.0,   # excitatory synaptic time constant in ms
    'tau_syn_I'  : 20.0,   # inhibitory synaptic time constant in ms
    'tau_w'      : 144.0,  # adaptation time constant in ms
    'v_reset'    : -70.6,  # reset potential in mV
    'v_rest'     : -70.6,  # resting potential in mV
    'v_spike'    : -40.0,  # spike detection voltage in mV
    'v_thresh'   : -50.4,  # spike initiation threshold voltage in mV
}

# We create a Population with 1 neuron of our neuron model
N1 = pynn.Population(size=neuron_count, cellclass=neuron_model, cellparams=neuron_parameters)

# A spike source array with spike times given in a list
spktimes = [10., 50., 65., 89., 233., 245., 255., 345., 444.4]
spike_source = pynn.Population(1, pynn.SpikeSourceArray, {'spike_times':spktimes})

# Connect the Spike source to our neuron
pynn.Projection(spike_source, N1, pynn.OneToOneConnector(weights=0.0138445), target='excitatory')

# record the membrane voltage of all neurons of the population
N1.record_v()
# record the spikes of all neurons of the population
N1.record()

# run the simulation for 500 ms
duration = 500.
pynn.run(duration)

# After the simulation, we get Spikes
spike_times = N1.getSpikes()
for pair in spike_times:
    print "Neuron ", int(pair[0]), " spiked at ", pair[1]

# Plot voltage
do_plot = False
if do_plot:
    import pylab
    v = N1.get_v()[:,1:3]  # strip ID
    pylab.plot(v[:,0], v[:,1])
    pylab.xlabel("Time [ms]")
    pylab.ylabel("Voltage [mV]")
    pylab.xlim(0, duration)
    pylab.show()

# clean up pyNN
pynn.end()


Note that the same script runs also with pyNN.nest, just change the first line that imports the PyNN backend.

### ESS Config

In addition, one can specify an ESS configuration as follows:

import pysthal

marocco = PyMarocco()
marocco.backend = PyMarocco.ESS

ess_config = pysthal.ESSConfig()
ess_config.enable_weight_distortion = True
ess_config.weight_distortion = 0.2
ess_config.pulse_statistics_file = "pulse_stats.py"

marocco.ess_config = ess_config


Parameters of ESSConfig:

enable_weight_distortion

Enables the distortion of synaptic weights in the virtual hardware system.

This option can be used to mimic the fixed-pattern noise of the synaptic weights on the real hardware.

Default: False

weight_distortion

Specifies the distortion of synaptic weights in the virtual hardware system.

This parameter defines the fraction of the original value that is used as the standard deviation when randomizing the weight according to a normal distribution around the original value. All weights are clipped to positive values.

Default: 0.0

pulse_statistics_file

Name of file to which the ESS pulse statistics are written.

See Pulse Loss Statistics for details.

Default: ""

### Pulse Loss Statistics

The ESS can count all spikes that are lost anywhere in the virtual hardware system. Spikes are mostly lost in the off-wafer communication network (also called “Layer 2 network”) that connects the wafer to the host PC. In the Layer 2 network, pulse loss can happen on two routes:

1. Stimulation: not all spikes from the spike sources (SpikeSourcePoisson or SpikeSourceArray) are delivered to their targets, because the bandwidth in the off-wafer network is limited. When a spike is lost, it is lost for all of its targets.
2. Recording: due to the same bandwidth constraints in the off-wafer network, some spikes of real neurons can be lost on the route from the wafer to the FPGAs, hence some events are missing in the received spike data. However, these ‘non-recorded’ spikes did reach their target neurons on the wafer.

Spikes can also be lost on the wafer, but only in rare cases, when many neurons located on the same HICANN fire synchronously.

1. On-wafer Spike Loss: This is the case of pulses lost in the on-wafer pulse-communication system (also called Layer 1 network). If this happens, spikes are completely deleted, and reach no other neuron.
2. Spike Drop before Simulation: The playback module of the FPGA, which plays back the stimulus pulses at given times, also has a limited bandwidth. This limitation is taken into account beforehand, so that spikes are dropped even before the simulation, in order to avoid further delaying many more spikes during an experiment.
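As a toy illustration of such a bandwidth limit (the function and the numbers are assumptions for illustration, not the real FPGA playback logic), one can think of events closer together than some minimum interval being dropped before the simulation starts:

```python
def drop_over_bandwidth(spike_times, min_interval):
    # Toy model (not the real FPGA playback logic): keep an event only if
    # at least min_interval has passed since the last kept event.
    kept, last = [], None
    for t in sorted(spike_times):
        if last is None or t - last >= min_interval:
            kept.append(t)
            last = t
    return kept

# Events at 0.1 ms spacing with an assumed 0.25 ms minimum interval:
# only the events at 0.0 and 0.3 survive, the rest are dropped.
kept = drop_over_bandwidth([0.0, 0.1, 0.2, 0.3, 0.4, 0.5], 0.25)
```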

The ESS counts the lost and sent pulses. After the simulation, you will see something like the following in the log for a log level >= 2:

INFO  Default *************************************
INFO  Default LostEventLogger::summary
INFO  Default Layer 2 events dropped before sim : 837/3939 (21.249 %)
INFO  Default Layer 2 events lost :               243/3199 (7.59612 %)
INFO  Default Layer 2 events lost downwards :     243/3102 (7.83366 %)
INFO  Default Layer 2 events lost upwards   :     0/97 (0 %)
INFO  Default Layer 1 events lost : 0/79 (0 %)
INFO  Default *************************************


You can obtain these data in a file by setting pulse_statistics_file in the ESS config:

marocco.ess_config.pulse_statistics_file = "pulse_stats.py"
sim.setup(marocco=marocco)


Then the pulse statistics file contains a Python dictionary pulse_statistics, which can be used for further processing:

pulse_statistics = {
'l2_down_before_sim': 3939,
'l2_down_dropped_before_sim': 837,
'l2_down_sent': 3102,
'l2_down_lost': 243,
'l2_up_sent': 97,
'l2_up_lost': 0,
'l1_neuron_sent': 79,
'l1_neuron_lost': 0,
}
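As a minimal sketch of such processing, the loss fractions reported in the log summary above can be recomputed from this dictionary (using the example values):

```python
# Example values from the pulse statistics file shown above.
pulse_statistics = {
    'l2_down_before_sim': 3939,
    'l2_down_dropped_before_sim': 837,
    'l2_down_sent': 3102,
    'l2_down_lost': 243,
    'l2_up_sent': 97,
    'l2_up_lost': 0,
    'l1_neuron_sent': 79,
    'l1_neuron_lost': 0,
}

# Fraction of stimulus events dropped before the simulation
# (cf. the "dropped before sim" line of the log summary).
dropped = 100.0 * pulse_statistics['l2_down_dropped_before_sim'] \
    / pulse_statistics['l2_down_before_sim']

# Fraction of stimulus events lost on the way down to the wafer
# (cf. the "lost downwards" line of the log summary).
lost_down = 100.0 * pulse_statistics['l2_down_lost'] \
    / pulse_statistics['l2_down_sent']

print("dropped before sim: %.3f %%" % dropped)   # 21.249 %
print("lost downwards:     %.3f %%" % lost_down) # 7.834 %
```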