hxtorch.spiking.functional.iaf
Integrate and fire neurons
Classes

- CUBAIAFParams: Parameters for IAF integration and backward path.
- NamedTuple: Typed version of namedtuple.
- Unterjubel: Unterjubel hardware observables to allow correct gradient flow.
Functions
- hxtorch.spiking.functional.iaf.cuba_iaf_integration(input: torch.Tensor, params: NamedTuple, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, torch.Tensor]

Leaky integrate-and-fire neuron integration for the realization of simple spiking neurons with exponential synapses. Integrates according to:

    v^{t+1} = dt / \tau_{mem} * (v_l - v^t + i^t) + v^t
    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
    z^{t+1} = 1 if v^{t+1} > params.v_th
    v^{t+1} = v_reset if z^{t+1} == 1

Assumes i^0 = 0 and v^0 = v_reset.

Note: one dt synaptic delay between input and output.

- Parameters
  input – Input spikes in shape (batch, time, neurons).
  params – LIFParams object holding neuron parameters.
- Returns
  Returns the spike trains and the membrane traces as a tuple. Both tensors are of shape (batch, time, neurons).
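For orientation, a minimal plain-PyTorch sketch of the update loop these equations describe is given below. The names tau_mem, tau_syn, v_th, v_reset and v_leak (standing in for v_l) are placeholders for the corresponding entries of the parameter object, and the sketch omits everything the hxtorch routine adds on top (hardware data injection via Unterjubel, surrogate gradients)::

    import torch

    def cuba_iaf_reference(x: torch.Tensor, tau_mem: float, tau_syn: float,
                           v_th: float, v_reset: float, v_leak: float = 0.0,
                           dt: float = 1e-6):
        """Software-only sketch of the equations above; not the hxtorch code."""
        batch, time, neurons = x.shape
        v = torch.full((batch, neurons), v_reset)   # v^0 = v_reset
        i = torch.zeros(batch, neurons)             # i^0 = 0
        spikes, membrane = [], []
        for t in range(time):
            # v^{t+1} = dt / tau_mem * (v_l - v^t + i^t) + v^t
            v = dt / tau_mem * (v_leak - v + i) + v
            # i^{t+1} = i^t * (1 - dt / tau_syn) + x^t  (one dt synaptic delay)
            i = i * (1 - dt / tau_syn) + x[:, t]
            # z^{t+1} = 1 if v^{t+1} > v_th, then reset spiking membranes
            z = (v > v_th).to(x.dtype)
            v = torch.where(z.bool(), torch.full_like(v, v_reset), v)
            spikes.append(z)
            membrane.append(v)
        # Both outputs have shape (batch, time, neurons).
        return torch.stack(spikes, dim=1), torch.stack(membrane, dim=1)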
- hxtorch.spiking.functional.iaf.cuba_refractory_iaf_integration(input: torch.Tensor, params: NamedTuple, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, torch.Tensor]

Integrate-and-fire neuron integration for the realization of simple spiking neurons with exponential synapses and a refractory period. Integrates according to:

    v^{t+1} = dt / \tau_{mem} * i^t + v^t
    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
    z^{t+1} = 1 if v^{t+1} > params.v_th
    v^{t+1} = params.v_reset if z^{t+1} == 1 or ref^t > 0
    ref^{t+1} -= 1
    ref^{t+1} = params.tau_ref if z^{t+1} == 1

Assumes i^0 = 0 and v^0 = v_reset.

Note: one dt synaptic delay between input and output.

- Parameters
  input – Input spikes in shape (batch, time, neurons).
  params – LIFParams object holding neuron parameters.
- Returns
  Returns the spike trains and the membrane traces as a tuple. Both tensors are of shape (batch, time, neurons).
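A hedged usage sketch is shown below. The parameter container and its field names are assumptions made for illustration only (the actual parameter class shipped with hxtorch may name and scale these differently), which is why the call into hxtorch is left commented out::

    from typing import NamedTuple

    import torch

    class RefractoryIAFParamsSketch(NamedTuple):
        # Hypothetical fields, inferred from the equations above.
        tau_mem: float = 10e-6   # membrane time constant [s]
        tau_syn: float = 5e-6    # synaptic time constant [s]
        tau_ref: float = 2e-6    # refractory period [s]
        v_th: float = 1.0        # spike threshold
        v_reset: float = 0.0     # reset potential

    params = RefractoryIAFParamsSketch()

    # Sparse random input spikes of shape (batch, time, neurons).
    spikes_in = (torch.rand(16, 200, 64) < 0.02).float()

    # from hxtorch.spiking.functional.iaf import cuba_refractory_iaf_integration
    # z, v = cuba_refractory_iaf_integration(spikes_in, params, hw_data=None, dt=1e-6)
    # z and v are both of shape (batch, time, neurons).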
- hxtorch.spiking.functional.iaf.iaf_step(z: torch.Tensor, v: torch.Tensor, i: torch.Tensor, input: torch.Tensor, z_hw: torch.Tensor, v_hw: torch.Tensor, params: NamedTuple, dt: float) → Tuple[torch.Tensor, …]

Integrate the membrane of the neurons one time step further according to the integrate-and-fire dynamics.

- Parameters
  z – The spike tensor at time step t.
  v – The membrane tensor at time step t.
  i – The current tensor at time step t.
  input – The input tensor at time step t (graded spikes).
  z_hw – The hardware spikes corresponding to the current time step. In case this is None, no HW spikes will be injected.
  v_hw – The hardware CADC traces corresponding to the current time step. In case this is None, no HW CADC values will be injected.
  params – Parameter object holding the LIF parameters.
  dt – Integration step width.
- Returns
  Returns a tuple (z, v, i) holding the tensors of time step t + 1.
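As a rough sketch of what one such step does (parameter names are placeholders, and the ordering as well as the hardware injection are simplified here, whereas hxtorch routes hardware observables through the Unterjubel autograd function so gradients keep flowing through the simulated quantities)::

    import torch

    def iaf_step_sketch(z, v, i, inp, z_hw, v_hw,
                        tau_mem, tau_syn, v_th, v_reset, dt=1e-6):
        """One integration step; hardware observables simply replace the
        simulated values here instead of being injected via Unterjubel."""
        v = v + dt / tau_mem * i                  # membrane increment from current
        v = v if v_hw is None else v_hw           # use CADC trace if available
        i = i * (1 - dt / tau_syn) + inp          # exponential synapse update
        z = (v > v_th).to(v.dtype)                # threshold crossing
        z = z if z_hw is None else z_hw           # use hardware spikes if available
        v = torch.where(z.bool(), torch.full_like(v, v_reset), v)  # reset after spike
        return z, v, i                            # tensors of time step t + 1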
- hxtorch.spiking.functional.iaf.refractory_update(z: torch.Tensor, v: torch.Tensor, ref_state: torch.Tensor, params: NamedTuple, dt: float = 1e-06) → Tuple[torch.Tensor, …]

Update the neuron membrane and spikes to account for the refractory period. This implementation is widely adopted from: https://github.com/norse/norse/blob/main/norse/torch/functional/lif_refrac.py

- Parameters
  z – The spike tensor at time step t.
  v – The membrane tensor at time step t.
  ref_state – The refractory state holding the number of time steps the neurons have to remain in the refractory period.
  params – Parameter object holding the LIF parameters.
- Returns
  Returns a tuple (z, v, ref_state) holding the tensors of time step t.
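A sketch of this bookkeeping follows (the norse lif_refrac module linked above follows the same idea; v_reset and refractory_steps, i.e. the refractory time expressed in integration steps, are placeholder names)::

    import torch

    def refractory_update_sketch(z, v, ref_state, v_reset, refractory_steps):
        """Clamp refractory neurons and manage their countdown; not hxtorch's code."""
        refractory = ref_state > 0
        # Refractory neurons emit no spikes and stay at the reset potential.
        z = torch.where(refractory, torch.zeros_like(z), z)
        v = torch.where(refractory, torch.full_like(v, v_reset), v)
        # Count down, and restart the counter where a new spike occurred.
        ref_state = torch.clamp(ref_state - 1, min=0)
        ref_state = torch.where(z.bool(),
                                torch.full_like(ref_state, refractory_steps),
                                ref_state)
        return z, v, ref_state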
- hxtorch.spiking.functional.iaf.threshold(input: torch.Tensor, method: str, alpha: float) → torch.Tensor

Selection of the used threshold function.

- Parameters
  input – Input tensor to the threshold function.
  method – The string indicator of the threshold function. Currently supported: ‘super_spike’.
  alpha – Parameter controlling the slope of the surrogate derivative in case of ‘super_spike’.
- Returns
  Returns the tensor of the threshold function.
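For context, the ‘super_spike’ method refers to the SuperSpike surrogate gradient (Zenke & Ganguli, 2018): a Heaviside step in the forward pass combined with the derivative of a fast sigmoid, 1 / (alpha * |x| + 1)^2, in the backward pass. A minimal sketch, not necessarily identical to hxtorch's implementation::

    import torch

    class SuperSpikeSketch(torch.autograd.Function):
        """Heaviside forward pass with a SuperSpike surrogate derivative."""

        @staticmethod
        def forward(ctx, inp: torch.Tensor, alpha: float) -> torch.Tensor:
            ctx.save_for_backward(inp)
            ctx.alpha = alpha
            return (inp > 0.).to(inp.dtype)       # spike where input is positive

        @staticmethod
        def backward(ctx, grad_output: torch.Tensor):
            inp, = ctx.saved_tensors
            surrogate = 1. / (ctx.alpha * inp.abs() + 1.) ** 2
            return grad_output * surrogate, None  # no gradient w.r.t. alpha

    # Conceptually, threshold(v - v_th, 'super_spike', alpha) would then behave
    # roughly like:
    # z = SuperSpikeSketch.apply(v - v_th, alpha)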