hxtorch.spiking.functional

Modules

hxtorch.spiking.functional.dropout

Custom BatchDropout function

hxtorch.spiking.functional.eventprop

hxtorch.spiking.functional.iaf

Integrate and fire neurons

hxtorch.spiking.functional.li

Leaky-integrate neurons

hxtorch.spiking.functional.lif

Leaky-integrate and fire neurons

hxtorch.spiking.functional.linear(input, weight)

Wrap linear to allow signature inspection

hxtorch.spiking.functional.refractory

Refractory update for neurons with refractory behaviour

hxtorch.spiking.functional.spike_source

Define different input spike sources

hxtorch.spiking.functional.superspike

Surrogate gradient for SuperSpike.

hxtorch.spiking.functional.threshold(input, …)

Selection of the used threshold function.

hxtorch.spiking.functional.unterjubel

Autograd function to ‘unterjubel’ (German for ‘inject’) hardware observables and allow correct gradient back-propagation.

Classes

EventPropNeuronFunction(*args, **kwargs)

Define gradient using adjoint code (EventProp) from norse

EventPropSynapseFunction(*args, **kwargs)

Synapse function for proper gradient transport when using EventPropNeuron.

SuperSpike(*args, **kwargs)

Define Surrogate Gradient ‘SuperSpike’ (negative side of Fast Sigmoid). See: https://arxiv.org/abs/1705.11146

Functions

hxtorch.spiking.functional.batch_dropout(input: torch.Tensor, mask: torch.Tensor) → torch.Tensor

Applies a dropout mask to a batch of inputs.

Parameters
  • input – The input tensor to apply dropout to.

  • mask – The dropout mask. Entries in the mask which are False will disable their corresponding entry in input.

Returns

The input tensor with dropout mask applied.
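
A minimal torch-only sketch of the masking semantics described above (an illustration, not the hxtorch implementation):

    import torch

    # Entries whose mask value is False are zeroed for every sample and
    # every time step; True entries pass through unchanged.
    inp = torch.rand(4, 100, 10)     # (batch, time, neurons)
    mask = torch.rand(10) > 0.2      # keep roughly 80 % of the neurons
    out = inp * mask                 # broadcasts over batch and time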

hxtorch.spiking.functional.cuba_iaf_integration(input: torch.Tensor, *, reset: torch.Tensor, threshold: torch.Tensor, tau_syn: torch.Tensor, tau_mem: torch.Tensor, method: torch.Tensor, alpha: torch.Tensor, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, torch.Tensor]

Integrate and fire neuron integration for realization of simple spiking neurons with exponential synapses. Integrates according to:

    v^{t+1} = dt / \tau_{mem} * i^t + v^t
    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
    z^{t+1} = 1 if v^{t+1} > threshold
    v^{t+1} = reset if z^{t+1} == 1

Assumes i^0 = 0 and v^0 = reset.

Note: One dt synaptic delay between input and output.

Parameters
  • input – Input tensor holding ‘graded_spikes’ in shape (batch, time, neurons).

  • reset – The reset voltage as torch.Tensor.

  • threshold – The threshold voltage as torch.Tensor.

  • tau_syn – The synaptic time constant as torch.Tensor.

  • tau_mem – The membrane time constant as torch.Tensor.

  • method – The method used for the surrogate gradient, e.g., ‘superspike’.

  • alpha – The slope of the surrogate gradient in case of ‘superspike’.

  • hw_data – An optional tuple holding hardware observables in the order (spikes, membrane_cadc, membrane_madc).

  • dt – Integration step width.

Returns

Returns tuple of tensors with membrane traces, spikes and synaptic current. Tensors are of shape (batch, time, neurons).

hxtorch.spiking.functional.cuba_li_integration(input: torch.Tensor, *, leak: torch.Tensor, tau_syn: torch.Tensor, tau_mem: torch.Tensor, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → torch.Tensor

Leaky-integrate neuron integration for realization of readout neurons with exponential synapses. Integrates according to:

    v^{t+1} = dt / \tau_{mem} * (v_l - v^t + i^t) + v^t
    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t

Assumes i^0 = 0 and v^0 = 0.

Note: One dt synaptic delay between input and output.

Parameters
  • input – Input graded spike tensor of shape (batch, time, neurons).

  • leak – The leak voltage as torch.Tensor.

  • tau_syn – The synaptic time constant as torch.Tensor.

  • tau_mem – The membrane time constant as torch.Tensor.

  • hw_data – An optional tuple holding hardware observables in the order (None, membrane_cadc, membrane_madc).

  • dt – Integration step width.

Returns

Returns the membrane trace in shape (batch, time, neurons).
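
A hypothetical call following the documented signature; the constants below are placeholders, not calibrated hardware parameters:

    import torch
    from hxtorch.spiking.functional import cuba_li_integration

    graded_spikes = torch.rand(4, 100, 10)   # (batch, time, neurons)
    trace = cuba_li_integration(
        graded_spikes,
        leak=torch.tensor(0.0),              # placeholder leak voltage
        tau_syn=torch.tensor(5e-6),          # placeholder time constants
        tau_mem=torch.tensor(10e-6),
    )
    # trace has shape (4, 100, 10)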

hxtorch.spiking.functional.cuba_lif_integration(input: torch.Tensor, *, leak: torch.Tensor, reset: torch.Tensor, threshold: torch.Tensor, tau_syn: torch.Tensor, tau_mem: torch.Tensor, method: torch.Tensor, alpha: torch.Tensor, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, ...]

Leaky-integrate and fire neuron integration for realization of simple spiking neurons with exponential synapses. Integrates according to:

    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
    v^{t+1} = dt / \tau_{mem} * (v_l - v^t + i^t) + v^t
    z^{t+1} = 1 if v^{t+1} > threshold
    v^{t+1} = reset if z^{t+1} == 1

Assumes i^0 = 0 and v^0 = leak.

Note: One dt synaptic delay between input and output.

TODO: Issue 3992

Parameters
  • input – Tensor holding ‘graded_spikes’ in shape (batch, time, neurons).

  • leak – The leak voltage as torch.Tensor.

  • reset – The reset voltage as torch.Tensor.

  • threshold – The threshold voltage as torch.Tensor.

  • tau_syn – The synaptic time constant as torch.Tensor.

  • tau_mem – The membrane time constant as torch.Tensor.

  • method – The method used for the surrogate gradient, e.g., ‘superspike’.

  • alpha – The slope of the surrogate gradient in case of ‘superspike’.

  • hw_data – An optional tuple holding hardware observables in the order (spikes, membrane_cadc, membrane_madc).

  • dt – Integration step width.

Returns

Returns tuple holding tensors with membrane traces, spikes and synaptic current. Tensors are of shape (batch, time, neurons).
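
For illustration, a torch-only Euler sketch of the update equations above, without surrogate gradients or hardware injection (an illustration, not the hxtorch implementation):

    import torch

    def lif_sketch(x: torch.Tensor, leak: float, reset: float,
                   threshold: float, tau_syn: float, tau_mem: float,
                   dt: float = 1e-6):
        batch, time, neurons = x.shape
        i = x.new_zeros(batch, neurons)             # i^0 = 0
        v = x.new_full((batch, neurons), leak)      # v^0 = leak
        traces, spikes = [], []
        for t in range(time):
            v = v + dt / tau_mem * (leak - v + i)   # v^{t+1} uses i^t
            z = (v > threshold).to(x.dtype)         # spike condition
            v = torch.where(z > 0, torch.full_like(v, reset), v)
            i = i * (1.0 - dt / tau_syn) + x[:, t]  # one dt synaptic delay
            traces.append(v)
            spikes.append(z)
        return torch.stack(traces, 1), torch.stack(spikes, 1)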

hxtorch.spiking.functional.cuba_refractory_iaf_integration(input: torch.Tensor, *, reset: torch.Tensor, threshold: torch.Tensor, tau_syn: torch.Tensor, tau_mem: torch.Tensor, refractory_time: torch.Tensor, method: torch.Tensor, alpha: torch.Tensor, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, torch.Tensor]

Integrate and fire neuron integration for realization of simple spiking neurons with exponential synapses and refractory period. Integrates according to:

    v^{t+1} = dt / \tau_{mem} * i^t + v^t
    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
    z^{t+1} = 1 if v^{t+1} > threshold
    v^{t+1} = reset if z^{t+1} == 1 or ref^t > 0
    ref^{t+1} -= 1
    ref^{t+1} = refractory_time if z^{t+1} == 1

Assumes i^0 = 0 and v^0 = reset.

Note: One dt synaptic delay between input and output.

Parameters
  • input – Tensor holding ‘graded_spikes’ in shape (batch, time, neurons).

  • reset – The reset voltage as torch.Tensor.

  • threshold – The threshold voltage as torch.Tensor.

  • tau_syn – The synaptic time constant as torch.Tensor.

  • tau_mem – The membrane time constant as torch.Tensor.

  • refractory_time – The refractory time constant as torch.Tensor.

  • method – The method used for the surrogate gradient, e.g., ‘superspike’.

  • alpha – The slope of the surrogate gradient in case of ‘superspike’.

  • hw_data – An optional tuple holding hardware observables in the order (spikes, membrane_cadc, membrane_madc).

  • dt – Integration step width.

Returns

Returns tuple holding tensors with membrane traces, spikes and synaptic current. Tensors are of shape (batch, time, neurons).

hxtorch.spiking.functional.cuba_refractory_lif_integration(input: torch.Tensor, *, leak: torch.Tensor, reset: torch.Tensor, threshold: torch.Tensor, tau_syn: torch.Tensor, tau_mem: torch.Tensor, refractory_time: torch.Tensor, method: torch.Tensor, alpha: torch.Tensor, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, ...]

Leaky-integrate and fire neuron integration for realization of simple spiking neurons with exponential synapses and refractory period.

Integrates according to:

    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
    v^{t+1} = dt / \tau_{mem} * (v_l - v^t + i^{t+1}) + v^t
    z^{t+1} = 1 if v^{t+1} > threshold
    v^{t+1} = reset if z^{t+1} == 1 or ref^{t+1} > 0
    ref^{t+1} -= 1
    ref^{t+1} = refractory_time if z^{t+1} == 1

Assumes i^0 = 0 and v^0 = 0.

Parameters
  • input – Tensor holding ‘graded_spikes’ in shape (batch, time, neurons).

  • leak – The leak voltage as torch.Tensor.

  • reset – The reset voltage as torch.Tensor.

  • threshold – The threshold voltage as torch.Tensor.

  • tau_syn – The synaptic time constant as torch.Tensor.

  • tau_mem – The membrane time constant as torch.Tensor.

  • refractory_time – The refractory time constant as torch.Tensor.

  • method – The method used for the surrogate gradient, e.g., ‘superspike’.

  • alpha – The slope of the surrogate gradient in case of ‘superspike’.

  • hw_data – An optional tuple holding hardware observables in the order (spikes, membrane_cadc, membrane_madc).

  • dt – Integration step width.

Returns

Returns tuple holding tensors with membrane traces, spikes and synaptic current. Tensors are of shape (batch, time, neurons).
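
The refractory bookkeeping above amounts to a per-neuron counter that is reloaded on spikes and clamps the membrane to the reset voltage while positive. A sketch of a single time step (the step-counting scheme is an assumption, not the hxtorch code):

    import torch

    v = torch.randn(4, 10)                     # membrane after integration
    z = (torch.rand(4, 10) > 0.9).float()      # spikes of this step
    ref = torch.zeros(4, 10)                   # refractory step counters
    reset, ref_steps = 0.0, 30                 # placeholder values

    # clamp neurons that spiked or are still refractory to the reset voltage
    v = torch.where((z > 0) | (ref > 0), torch.full_like(v, reset), v)
    ref = (ref - 1).clamp(min=0)               # ref^{t+1} -= 1
    ref = torch.where(z > 0, torch.full_like(ref, float(ref_steps)), ref)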

hxtorch.spiking.functional.input_neuron(input: torch.Tensor, hw_data: Optional[torch.Tensor] = None) → hxtorch.spiking.handle.NeuronHandle

Input neuron, forwards spikes without modification in non-hardware runs but injects loop-back recorded spikes if available.

Parameters
  • input – Input spike tensor.

  • hw_data – Loop-back spikes, if available.

Returns

Returns the input spike tensor.
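
The described behavior reduces to a pass-through with optional substitution; a minimal sketch:

    import torch
    from typing import Optional

    # Pass spikes through in software-only runs, substitute loop-back
    # recorded hardware spikes when they are available.
    def input_neuron_sketch(input: torch.Tensor,
                            hw_data: Optional[torch.Tensor] = None) -> torch.Tensor:
        return hw_data if hw_data is not None else input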

hxtorch.spiking.functional.linear(input: torch.Tensor, weight: torch.nn.parameter.Parameter, bias: Optional[torch.nn.parameter.Parameter] = None) → torch.Tensor

Wrap linear to allow signature inspection

hxtorch.spiking.functional.linear_exponential_clamp(inputs: torch.Tensor, weight: torch.nn.parameter.Parameter, bias: Optional[torch.nn.parameter.Parameter] = None, cap: float = 1.5, start_weight: float = 61.0, quantize: bool = False) → torch.Tensor

Clamps the weights with an exponential roll-off towards saturation.

Parameters
  • inputs – The input neuron tensor holding spikes to be multiplied with the weight parameter tensor.

  • weight – Weight Tensor to be clamped.

  • bias – The bias of the linear operation.

  • cap – Upper boundary of the weights (the lower boundary is -1 * cap). Choose this value to be 1 / weight_scaling (see hxtorch.spiking.Synapse) to saturate the software weights where their scaled values saturate on hardware.

  • start_weight – Indicates the hardware weight at which the roll-off begins. Has to be in the range (0, 63).

  • quantize – If True, the weights are rounded to multiples of cap / 63 to match the discrete hardware representation.

Returns

The clamped (and, if quantize is set, rounded) weights.
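
One plausible realization of such a roll-off, shown for illustration only (an assumption, not the hxtorch formula): identity below a start point, exponential saturation towards cap above it, continuously differentiable at the transition.

    import torch

    def exp_clamp_sketch(w: torch.Tensor, cap: float = 1.5,
                         start_weight: float = 61.0) -> torch.Tensor:
        # start point translated from hardware units (0..63) to software scale
        w_start = start_weight / 63.0 * cap
        a = w.abs()
        # exponential approach to cap; slope 1 at the transition point
        soft = cap - (cap - w_start) * torch.exp(-(a - w_start) / (cap - w_start))
        return torch.where(a <= w_start, w, torch.sign(w) * soft)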

hxtorch.spiking.functional.linear_sparse(input: torch.Tensor, weight: torch.nn.parameter.Parameter, connections: Optional[torch.Tensor] = None, bias: Optional[torch.nn.parameter.Parameter] = None) → torch.Tensor

Wrap linear to allow signature inspection. Inactive connections are disabled in the weight tensor.

Parameters
  • input – The input neuron tensor holding spikes to be multiplied with the params tensor weight.

  • weight – The weight parameter tensor. This tensor is expected to be dense (see issue 4039).

  • bias – The bias of the linear operation.

  • connections – A dense boolean connection mask indicating active connections. If None, the weight tensor remains untouched.

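A sketch of the masking semantics (an assumed equivalent, not the hxtorch code): inactive connections are zeroed in the dense weight tensor before the matmul.

    import torch
    import torch.nn.functional as F

    inp = torch.rand(4, 100, 12)               # (batch, time, in_features)
    weight = torch.nn.Parameter(torch.rand(8, 12))
    connections = torch.rand(8, 12) > 0.5      # boolean mask of active synapses
    out = F.linear(inp, weight * connections)  # (batch, time, out_features)
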
hxtorch.spiking.functional.threshold(input: torch.Tensor, method: str, alpha: float) → torch.Tensor

Selection of the used threshold function.

Parameters
  • input – Input tensor to threshold function.

  • method – The string indicator of the threshold function. Currently supported: ‘superspike’.

  • alpha – Parameter controlling the slope of the surrogate derivative in case of ‘superspike’.

Returns

Returns the tensor of the threshold function.
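
A usage sketch following the documented signature; the membrane tensor is a placeholder. In the backward pass, SuperSpike applies the fast-sigmoid surrogate derivative 1 / (alpha * |v| + 1)**2.

    import torch
    from hxtorch.spiking.functional import threshold

    v = torch.randn(4, 100, 10, requires_grad=True)  # membrane minus threshold
    z = threshold(v, "superspike", 50.0)             # binary spikes in forward
    z.sum().backward()                               # gradients via the surrogate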