hxtorch.spiking.functional.aelif

Adaptive exponential leaky integrate-and-fire neurons

Classes

Handle(*args, **kwargs)

Factory for classes which are to be used as custom handles for observable data, depending on the specific observables a module deals with.

Unterjubel(*args, **kwargs)

Unterjubel hardware observables to allow correct gradient flow

Functions

hxtorch.spiking.functional.aelif.cuba_aelif_integration(input: Union[Tuple[torch.Tensor], torch.Tensor], *, leak: Union[torch.Tensor, float, int], reset: Union[torch.Tensor, float, int], threshold: Union[torch.Tensor, float, int], tau_syn: Union[torch.Tensor, float, int], c_mem: Union[torch.Tensor, float, int], g_l: Union[torch.Tensor, float, int], refractory_time: Union[torch.Tensor, float, int], method: str, alpha: float, exp_slope: Union[torch.Tensor, float, int], exp_threshold: Union[torch.Tensor, float, int], subthreshold_adaptation_strength: Union[torch.Tensor, float, int], spike_triggered_adaptation_increment: Union[torch.Tensor, float, int], tau_adap: Union[torch.Tensor, float, int], hw_data: Optional[Tuple[Optional[torch.Tensor], ...]] = None, dt: float = 1e-06, leaky: bool = True, fire: bool = True, refractory: bool = False, exponential: bool = False, subthreshold_adaptation: bool = False, spike_triggered_adaptation: bool = False, integration_step_code: str)

Adaptive exponential leaky integrate-and-fire (AdEx) neuron integration for the realization of AdEx neurons with exponential synapses. Individual terms of the differential equations of the membrane voltage v and the adaptation current w can be enabled or disabled via flags.

If all flags are set, it integrates according to:

i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t

v^{t+1} = v^t + dt / c_{mem} * (g_l * (v_l - v^t + \Delta_T * exp((v^t - v_T) / \Delta_T)) + i^t - w^t)

z^{t+1} = 1 if v^{t+1} > params.threshold

w^{t+1} = w^t + dt / \tau_{adap} * (a * (v^{t+1} - v_l) - w^t) + b * z^{t+1}

v^{t+1} = params.reset if z^{t+1} == 1

Assumes i^0 = 0, v^0 = v_leak if the leak term is enabled (else v^0 = 0), and w^0 = 0.

Note: There is one dt synaptic delay between input and output.
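The discrete update above can be sketched for a single neuron in plain Python. This is a simplified scalar illustration of the explicit-Euler step, not the hxtorch implementation; the short names a (subthreshold adaptation strength) and b (spike-triggered increment) follow the equations, not the actual keyword arguments.

```python
import math

def aelif_step(i, v, w, x, *, leak, reset, threshold, tau_syn, c_mem, g_l,
               exp_slope, exp_threshold, a, b, tau_adap, dt):
    """One explicit-Euler step of the AdEx equations (scalar sketch)."""
    # Exponential synapse: decay the current and add the input x^t.
    i_next = i * (1.0 - dt / tau_syn) + x
    # Membrane: leak term, exponential term, synaptic current, adaptation.
    dv = (dt / c_mem) * (g_l * (leak - v
                                + exp_slope * math.exp((v - exp_threshold) / exp_slope))
                         + i - w)
    v_next = v + dv
    # Threshold crossing yields a spike.
    z_next = 1.0 if v_next > threshold else 0.0
    # Adaptation: subthreshold term (a) plus spike-triggered increment (b).
    w_next = w + (dt / tau_adap) * (a * (v_next - leak) - w) + b * z_next
    # Reset the membrane on a spike.
    if z_next == 1.0:
        v_next = reset
    return i_next, v_next, w_next, z_next
```

Disabling a flag in the real function corresponds to dropping the matching term here (e.g., exponential=False removes the exp(...) summand).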

Parameters
  • input – torch.Tensor holding ‘graded_spikes’ in shape (batch, time, neurons) or tuple which holds one of such tensors for each input synapse.

  • leak – The leak voltage.

  • reset – The reset voltage.

  • threshold – The threshold voltage.

  • tau_syn – The synaptic time constant.

  • c_mem – The membrane capacitance.

  • g_l – The leak conductance.

  • refractory_time – The refractory time constant.

  • method – The method used for the surrogate gradient, e.g., ‘superspike’.

  • alpha – The slope of the surrogate gradient in case of ‘superspike’.

  • exp_slope – The exponential slope.

  • exp_threshold – The exponential threshold.

  • subthreshold_adaptation_strength – The subthreshold adaptation strength.

  • spike_triggered_adaptation_increment – The spike-triggered adaptation increment.

  • tau_adap – The adaptation time constant.

  • hw_data – An optional tuple holding optional hardware observables in the order (spikes, membrane_cadc, membrane_madc).

  • dt – Integration step width.

  • leaky – Flag that enables / disables the leak term when set to true / false.

  • fire – Flag that enables / disables firing behaviour when set to true / false.

  • refractory – Flag used to omit the execution of the refractory update in case the refractory time is set to zero.

  • exponential – Flag that enables / disables the exponential term in the differential equation for the membrane potential when set to true / false.

  • subthreshold_adaptation – Flag that enables / disables the subthreshold adaptation term in the differential equation of the adaptation when set to true / false.

  • spike_triggered_adaptation – Flag that enables / disables spike-triggered adaptation when set to true / false.

Returns

Returns a tuple holding tensors with spikes, membrane traces, adaptation currents and synaptic currents. Tensors are of shape (time, batch, neurons).
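The overall shape of such an integration loop can be sketched in plain Python for the simplest configuration (exponential, adaptation, and refractory flags off, i.e. a CUBA-LIF neuron). This is an illustrative stand-in, not a call into hxtorch; tau_mem here stands for c_mem / g_l, and the parameter values are arbitrary.

```python
def cuba_lif_trace(x, *, leak=0.0, reset=0.0, threshold=1.0,
                   tau_syn=10e-6, tau_mem=10e-6, dt=1e-6):
    """Simplified CUBA-LIF loop over time (scalar neuron sketch).

    x is a list of input values, one per time step; returns lists
    (spikes, v_trace, i_trace) of the same length, mirroring the
    (time, ...) layout of the real function's outputs.
    """
    spikes, v_trace, i_trace = [], [], []
    i, v = 0.0, leak  # v^0 = v_leak when the leak term is enabled
    for x_t in x:
        # Membrane sees the previous synaptic current i^t:
        # this realizes the one-dt synaptic delay noted above.
        v = v + (dt / tau_mem) * (leak - v + i)
        # Exponential synapse update with the new input.
        i = i * (1.0 - dt / tau_syn) + x_t
        # Fire and reset.
        z = 1.0 if v > threshold else 0.0
        if z:
            v = reset
        spikes.append(z)
        v_trace.append(v)
        i_trace.append(i)
    return spikes, v_trace, i_trace
```

With a constant suprathreshold input the neuron integrates up, spikes, resets, and repeats — the regular firing pattern expected of a LIF neuron.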

hxtorch.spiking.functional.aelif.refractory_update(z: torch.Tensor, v: torch.Tensor, ref_state: torch.Tensor, spikes_hw: torch.Tensor, membrane_hw: torch.Tensor, *, reset: torch.Tensor, refractory_time: torch.Tensor, dt: float) → Tuple[torch.Tensor, torch.Tensor, torch.Tensor]

Update neuron membrane and spikes to account for the refractory period. This implementation is largely adapted from: https://github.com/norse/norse/blob/main/norse/torch/functional/lif_refrac.py

Parameters
  • z – The spike tensor at time step t.

  • v – The membrane tensor at time step t.

  • ref_state – The refractory state holding the number of time steps each neuron has to remain in the refractory period.

  • spikes_hw – The hardware spikes corresponding to the current time step. In case this is None, no HW spikes will be injected.

  • membrane_hw – The hardware CADC traces corresponding to the current time step. In case this is None, no HW CADC values will be injected.

  • reset – The reset voltage as torch.Tensor.

  • refractory_time – The refractory time constant as torch.Tensor.

  • dt – Integration step width.

Returns

Returns a tuple (z, v, ref_state) holding the tensors of time step t.
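The refractory bookkeeping can be sketched per neuron in plain Python; this omits the hardware-injection branches (spikes_hw, membrane_hw) and works on scalars instead of tensors, so it only illustrates the counter logic, not the actual implementation.

```python
def refractory_update(z, v, ref_count, *, reset=0.0,
                      refractory_time=2e-6, dt=1e-6):
    """Hold the membrane at reset while the refractory counter is nonzero.

    ref_count holds the remaining refractory steps; a fresh spike
    reloads it with refractory_time / dt. Returns the possibly
    modified (z, v, ref_count) for this time step.
    """
    ref_steps = int(refractory_time / dt)
    if ref_count > 0:
        # Still refractory: clamp the membrane and suppress spiking.
        v, z = reset, 0.0
        ref_count -= 1
    if z == 1.0:
        # Fresh spike: start the refractory period.
        ref_count = ref_steps
    return z, v, ref_count
```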

hxtorch.spiking.functional.aelif.spiking_threshold(input: torch.Tensor, method: str, alpha: float) → torch.Tensor

Selection of the used threshold function.

Parameters
  • input – Input tensor to the threshold function.

  • method – The string indicator of the threshold function. Currently supported: ‘superspike’.

  • alpha – Parameter controlling the slope of the surrogate derivative in case of ‘superspike’.

Returns

Returns the tensor of the threshold function.
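The SuperSpike surrogate (Zenke & Ganguli) keeps the Heaviside step in the forward pass and replaces its derivative with that of a fast sigmoid in the backward pass. A plain-Python sketch of the assumed functional form (the exact scaling used by hxtorch may differ; check the source):

```python
def heaviside(x):
    """Forward pass: step function on the centered membrane v - v_th."""
    return 1.0 if x > 0.0 else 0.0

def superspike_grad(x, alpha):
    """Surrogate derivative used in the backward pass:
    d/dx H(x) ≈ 1 / (alpha * |x| + 1)^2 (fast-sigmoid derivative).
    alpha controls the sharpness of the peak around the threshold."""
    return 1.0 / (alpha * abs(x) + 1.0) ** 2
```

In a torch-based implementation these two halves would typically live in the forward and backward methods of a custom torch.autograd.Function.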