hxtorch.spiking.functional

Modules

hxtorch.spiking.functional.aelif

Adaptive exponential leaky-integrate and fire neurons

hxtorch.spiking.functional.dropout

Custom BatchDropout function

hxtorch.spiking.functional.eventprop

hxtorch.spiking.functional.li

Leaky-integrate neurons

hxtorch.spiking.functional.lif

Leaky-integrate and fire neurons

hxtorch.spiking.functional.linear(input, weight)

Wrap linear to allow signature inspection

hxtorch.spiking.functional.refractory

Refractory update for neurons with refractory behaviour

hxtorch.spiking.functional.spike_source

Define different input spike sources

hxtorch.spiking.functional.step_integration_code_factory

hxtorch.spiking.functional.superspike

Surrogate gradient for SuperSpike.

hxtorch.spiking.functional.threshold(input, …)

Selection of the used threshold function.

hxtorch.spiking.functional.unterjubel

Autograd function to ‘unterjubel’ (German for ‘inject’) hardware observables and allow correct gradient back-propagation.

Classes

CuBaStepCode(leaky, fire, refractory, …)

EventPropLIFFunction(*args, **kwargs)

Define gradient using adjoint code (EventProp) from norse

EventPropSynapseFunction(*args, **kwargs)

Synapse function for proper gradient transport when using EventPropLIF.

SuperSpike(*args, **kwargs)

Define Surrogate Gradient ‘SuperSpike’ (negative side of Fast Sigmoid). See: https://arxiv.org/abs/1705.11146

Functions

hxtorch.spiking.functional.batch_dropout(input: torch.Tensor, mask: torch.Tensor) → torch.Tensor

Applies a dropout mask to a batch of inputs.

Parameters
  • input – The input tensor to apply dropout to.

  • mask – The dropout mask. Entries in the mask which are False will disable their corresponding entry in input.

Returns

The input tensor with dropout mask applied.
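
A minimal usage sketch (the tensor shapes and the dropout probability are illustrative assumptions, not requirements of the function):

    import torch
    from hxtorch.spiking.functional import batch_dropout

    spikes = torch.rand(50, 10, 128)        # e.g. (time, batch, neurons)
    mask = torch.rand(128) > 0.2            # boolean mask; False disables a neuron
    dropped = batch_dropout(spikes, mask)   # masked entries are zeroed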

hxtorch.spiking.functional.cuba_aelif_integration(input: Union[Tuple[torch.Tensor], torch.Tensor], *, leak: Union[torch.Tensor, float, int], reset: Union[torch.Tensor, float, int], threshold: Union[torch.Tensor, float, int], tau_syn: Union[torch.Tensor, float, int], c_mem: Union[torch.Tensor, float, int], g_l: Union[torch.Tensor, float, int], refractory_time: Union[torch.Tensor, float, int], method: str, alpha: float, exp_slope: Union[torch.Tensor, float, int], exp_threshold: Union[torch.Tensor, float, int], subthreshold_adaptation_strength: Union[torch.Tensor, float, int], spike_triggered_adaptation_increment: Union[torch.Tensor, float, int], tau_adap: Union[torch.Tensor, float, int], hw_data: Optional[Tuple[Optional[torch.Tensor], ...]] = None, dt: float = 1e-06, leaky: bool = True, fire: bool = True, refractory: bool = False, exponential: bool = False, subthreshold_adaptation: bool = False, spike_triggered_adaptation: bool = False, integration_step_code: str)

Adaptive exponential leaky-integrate and fire neuron integration for realization of AdEx neurons with exponential synapses. Certain terms of the differential equations of the membrane voltage v and the adaptation current w can be disabled or enabled via flags.

If all flags are set, it integrates according to:

i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t

v^{t+1} = dt / c_{mem} * (g_l * (v_l - v^t + \Delta_T * exp((v^t - v_T) / \Delta_T)) + i^t - w^t) + v^t

z^{t+1} = 1 if v^{t+1} > threshold

w^{t+1} = w^t + dt / \tau_{adap} * (a * (v^{t+1} - v_l) - w^t) + b * z^{t+1}

v^{t+1} = reset if z^{t+1} == 1

Assumes i^0 = 0, w^0 = 0, and v^0 = v_leak if the leak term is enabled, else v^0 = 0.

Note: One dt synaptic delay between input and output.

Parameters
  • input – torch.Tensor holding ‘graded_spikes’ in shape (batch, time, neurons), or a tuple holding one such tensor for each input synapse.

  • leak – The leak voltage.

  • reset – The reset voltage.

  • threshold – The threshold voltage.

  • tau_syn – The synaptic time constant.

  • c_mem – The membrane capacitance.

  • g_l – The leak conductance.

  • refractory_time – The refractory time constant.

  • method – The method used for the surrogate gradient, e.g., ‘superspike’.

  • alpha – The slope of the surrogate gradient in case of ‘superspike’.

  • exp_slope – The exponential slope.

  • exp_threshold – The exponential threshold.

  • subthreshold_adaptation_strength – The subthreshold adaptation strength.

  • spike_triggered_adaptation_increment – The spike-triggered adaptation increment.

  • tau_adap – The adaptation time constant.

  • hw_data – An optional tuple holding optional hardware observables in the order (spikes, membrane_cadc, membrane_madc).

  • dt – Integration step width.

  • leaky – Flag that enables / disables the leak term when set to true / false.

  • fire – Flag that enables / disables firing behaviour when set to true / false.

  • refractory – Flag used to omit the execution of the refractory update in case the refractory time is set to zero.

  • exponential – Flag that enables / disables the exponential term in the differential equation for the membrane potential when set to true / false.

  • subthreshold_adaptation – Flag that enables / disables the subthreshold adaptation term in the differential equation of the adaptation when set to true / false.

  • spike_triggered_adaptation – Flag that enables / disables spike-triggered adaptation when set to true / false.

Returns

Returns a tuple holding tensors with spikes, membrane traces, adaptation current and synaptic current. Tensors are of shape (time, batch, neurons).
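
For illustration, a minimal forward-Euler sketch of a single update step following the equations above (plain PyTorch, not the library code; all flags enabled, with refractory handling, hardware injection and the surrogate gradient omitted; all parameter values are placeholders):

    import torch

    def adex_step(x, i, v, w, *, dt=1e-6, tau_syn=1e-5, c_mem=1.0, g_l=1.0,
                  v_l=0.0, v_th=1.0, v_reset=0.0, delta_t=0.1, v_t=0.8,
                  a=0.0, b=0.0, tau_adap=1e-4):
        # Synaptic current: exponential decay plus new input.
        i_new = i * (1 - dt / tau_syn) + x
        # Membrane: leak, exponential term, synaptic current and adaptation.
        # The membrane sees the *old* current i, giving the one-dt synaptic delay.
        dv = g_l * (v_l - v + delta_t * torch.exp((v - v_t) / delta_t)) + i - w
        v_new = v + dt / c_mem * dv
        # Hard threshold; in the library its gradient is replaced by a
        # surrogate such as SuperSpike.
        z_new = (v_new > v_th).to(v_new.dtype)
        # Adaptation: subthreshold term plus spike-triggered increment.
        w_new = w + dt / tau_adap * (a * (v_new - v_l) - w) + b * z_new
        # Reset the membrane where a spike occurred.
        v_new = torch.where(z_new.bool(), torch.full_like(v_new, v_reset), v_new)
        return i_new, v_new, w_new, z_new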

hxtorch.spiking.functional.exp_cuba_li_integration(input: torch.Tensor, *, leak: torch.Tensor, tau_syn_exp: torch.Tensor, tau_mem_exp: torch.Tensor, hw_data: Optional[torch.Tensor] = None) → torch.Tensor
hxtorch.spiking.functional.exp_cuba_lif_integration(input: torch.Tensor, *, leak: torch.Tensor, reset: torch.Tensor, threshold: torch.Tensor, tau_syn_exp: torch.Tensor, tau_mem_exp: torch.Tensor, method: torch.Tensor, alpha: torch.Tensor, hw_data: Optional[torch.Tensor] = None) → Tuple[torch.Tensor, ...]
hxtorch.spiking.functional.input_neuron(input: torch.Tensor, hw_data: Optional[torch.Tensor] = None) → types.Handle_current_membrane_cadc_membrane_madc_spikes

Input neuron, forwards spikes without modification in non-hardware runs but injects loop-back recorded spikes if available.

Parameters
  • input – Input spike tensor.

  • hw_data – Loop-back spikes, if available.

Returns

Returns the input spike tensor.

hxtorch.spiking.functional.linear(input: torch.Tensor, weight: torch.nn.parameter.Parameter, bias: torch.nn.parameter.Parameter = None) → torch.Tensor

Wrap linear to allow signature inspection.

hxtorch.spiking.functional.linear_exponential_clamp(inputs: torch.Tensor, weight: torch.nn.parameter.Parameter, bias: torch.nn.parameter.Parameter = None, cap: float = 1.5, start_weight: float = 61.0, quantize: bool = False) → torch.Tensor

Clamps the weights with an exponential roll-off towards saturation.

Parameters
  • inputs – The input neuron tensor holding spikes to be multiplied with the weight tensor.

  • weight – Weight Tensor to be clamped.

  • bias – The bias of the linear operation.

  • cap – Upper boundary of the weights (the lower boundary is -1 * cap). Choose this value to be 1 / weight_scaling (see hxtorch.spiking.Synapse) to saturate the software weights where their scaled values saturate on hardware.

  • start_weight – The hardware weight at which the roll-off begins. Has to be in the range (0, 63).

  • quantize – If True, the weights are rounded to multiples of cap / 63 to match the discrete hardware representation.

Returns

The clamped and, if quantize is set, rounded weights.
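
A hypothetical sketch of such a roll-off (not the library’s exact formula): linear pass-through below a start value, then an exponential approach towards cap, with the 63 reflecting the discrete hardware weight steps mentioned above:

    import torch

    def exp_clamp_sketch(weight: torch.Tensor, cap: float = 1.5,
                         start_weight: float = 61.0) -> torch.Tensor:
        # Software-side weight value at which the roll-off starts.
        start = cap * start_weight / 63.0
        mag, sign = weight.abs(), weight.sign()
        # Above start: approach cap exponentially, with slope 1 at the joint.
        rolled = cap - (cap - start) * torch.exp(-(mag - start) / (cap - start))
        return sign * torch.where(mag < start, mag, rolled)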

hxtorch.spiking.functional.linear_sparse(input: torch.Tensor, weight: torch.nn.parameter.Parameter, connections: torch.Tensor = None, bias: torch.nn.parameter.Parameter = None) → torch.Tensor

Wrap linear to allow signature inspection. Disable inactive connections in weight tensor.

Parameters
  • input – The input neuron tensor holding spikes to be multiplied with the params tensor weight.

  • weight – The weight parameter tensor. This tensor is expected to be dense, since PyTorch’s support for sparse parameters is limited (see PyTorch issue 4039).

  • bias – The bias of the linear operation.

  • connections – A dense boolean connection mask indicating active connections. If None, the weight tensor remains untouched.
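
Conceptually, the masking amounts to zeroing the disabled entries before the dense linear operation, along the lines of this sketch (not the library implementation):

    import torch
    import torch.nn.functional as F

    def linear_sparse_sketch(input, weight, connections=None, bias=None):
        if connections is not None:
            # Zero the weights of inactive connections; the tensor stays dense.
            weight = weight * connections
        return F.linear(input, weight, bias)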

hxtorch.spiking.functional.threshold(input: torch.Tensor, method: str, alpha: float) → torch.Tensor

Selection of the used threshold function.

Parameters
  • input – Input tensor to threshold function.

  • method – The string indicator of the threshold function. Currently supported: ‘superspike’.

  • alpha – Parameter controlling the slope of the surrogate derivative in case of ‘superspike’.

Returns

Returns the tensor of the threshold function.
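
A minimal usage sketch (the shape and alpha value are illustrative; in the backward pass the hard threshold’s gradient is replaced by the SuperSpike surrogate):

    import torch
    from hxtorch.spiking.functional import threshold

    v = torch.randn(10, 128, requires_grad=True)  # membrane relative to threshold
    spikes = threshold(v, "superspike", 50.0)     # binary spike tensor
    spikes.sum().backward()                       # gradients via the surrogate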