hxtorch.spiking.functional

Modules

hxtorch.spiking.functional.dropout

Custom BatchDropout function

hxtorch.spiking.functional.eventprop

hxtorch.spiking.functional.iaf

Integrate and fire neurons

hxtorch.spiking.functional.li

Leaky-integrate neurons

hxtorch.spiking.functional.lif

Leaky-integrate and fire neurons

hxtorch.spiking.functional.linear(input, weight)

Wrap linear to allow signature inspection

hxtorch.spiking.functional.refractory

Refractory update for neurons with refractory behaviour

hxtorch.spiking.functional.spike_source

Define different input spike sources

hxtorch.spiking.functional.superspike

Surrogate gradient for SuperSpike.

hxtorch.spiking.functional.threshold(input, …)

Selection of the used threshold function.

hxtorch.spiking.functional.unterjubel

Autograd function to ‘unterjubel’ (German for ‘inject’) hardware observables and allow correct gradient back-propagation.
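The underlying pattern resembles a straight-through estimator: the hardware observable replaces the model value in the forward pass, while gradients flow through the model value. A minimal sketch of that idea, assuming this simplified two-argument form (not the library's exact signature):

    import torch

    class UnterjubelSketch(torch.autograd.Function):
        """Forward the hardware observable, backpropagate as if the
        model value had been used (straight-through pattern)."""

        @staticmethod
        def forward(ctx, model_value: torch.Tensor, hw_value: torch.Tensor):
            # The hardware observable is injected into the graph.
            return hw_value

        @staticmethod
        def backward(ctx, grad_output: torch.Tensor):
            # The gradient is routed to the differentiable model value only.
            return grad_output, None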

Classes

CUBAIAFParams(tau_mem, tau_syn, …)

Parameters for IAF integration and backward path

CUBALIFParams(tau_mem, tau_syn, …)

Parameters for CUBA LIF integration and backward path

CUBALIParams(tau_mem, tau_syn, leak)

Parameters for CUBA LI integration and backward path

CalibratedCUBALIFParams(leak, reset, …)

Parameters for CUBA LIF integration and backward path

CalibratedCUBALIParams(leak, reset, …)

Parameters for CUBA LI integration and backward path

SuperSpike(*args, **kwargs)

Define Surrogate Gradient ‘SuperSpike’ (negative side of Fast Sigmoid). See: https://arxiv.org/abs/1705.11146
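A minimal sketch of such a surrogate: the forward pass is a Heaviside step, while the backward pass uses the derivative of the fast sigmoid, 1 / (alpha * |v| + 1)^2. The class name and alpha handling below are illustrative assumptions, not the library implementation:

    import torch

    class SuperSpikeSketch(torch.autograd.Function):
        """Heaviside step forward, fast-sigmoid surrogate gradient backward."""

        @staticmethod
        def forward(ctx, input: torch.Tensor, alpha: float) -> torch.Tensor:
            ctx.save_for_backward(input)
            ctx.alpha = alpha
            # Spike where the (threshold-shifted) membrane crosses zero.
            return torch.gt(input, 0.0).float()

        @staticmethod
        def backward(ctx, grad_output: torch.Tensor):
            input, = ctx.saved_tensors
            # Derivative of the fast sigmoid: 1 / (alpha * |v| + 1)^2
            grad = grad_output / (ctx.alpha * input.abs() + 1.0) ** 2
            return grad, None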

Functions

hxtorch.spiking.functional.batch_dropout(input: torch.Tensor, mask: torch.Tensor) → torch.Tensor

Applies a dropout mask to a batch of inputs.

Parameters
  • input – The input tensor to apply dropout to.

  • mask – The dropout mask. Entries in the mask which are False will disable their corresponding entry in input.

Returns

The input tensor with dropout mask applied.
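A hedged usage sketch; the (batch, time, neurons) layout and the broadcasting of a per-neuron mask are assumptions here:

    import torch
    from hxtorch.spiking.functional import batch_dropout

    x = torch.rand(16, 100, 128)   # assumed (batch, time, neurons) layout
    mask = torch.rand(128) < 0.9   # False entries disable their neuron
    y = batch_dropout(x, mask)     # disabled entries are dropped for the whole batch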

hxtorch.spiking.functional.cuba_iaf_integration(input: torch.Tensor, params: Union[hxtorch.spiking.functional.iaf.CalibratedCUBAIAFParams, hxtorch.spiking.functional.iaf.CUBAIAFParams], hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, torch.Tensor]

Leaky-integrate and fire neuron integration for realization of simple spiking neurons with exponential synapses. Integrates according to:

    v^{t+1} = dt / \tau_{mem} * (v_l - v^t + i^t) + v^t
    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
    z^{t+1} = 1 if v^{t+1} > params.threshold
    v^{t+1} = v_reset if z^{t+1} == 1

Assumes i^0 = 0 and v^0 = params.reset.

Note: One dt synaptic delay between input and output.

Parameters
  • input – Input spikes in shape (batch, time, neurons).

  • params – CUBAIAFParams object holding neuron parameters.

Returns

Returns the spike trains and membrane trace as a tuple. Both tensors are of shape (batch, time, neurons).
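Read as a time-discretized Euler scheme, the equations above amount to the following loop. This is a sketch of the stated update rules only, not the library implementation; parameter names are spelled out for readability:

    import torch

    def iaf_sketch(x, tau_mem, tau_syn, v_leak, v_th, v_reset, dt=1e-6):
        """Euler integration of the rules above; x: (batch, time, neurons)."""
        i = torch.zeros(x.shape[0], x.shape[2])
        v = torch.full((x.shape[0], x.shape[2]), v_reset)  # v^0 = params.reset
        spikes, membrane = [], []
        for t in range(x.shape[1]):
            v = v + dt / tau_mem * (v_leak - v + i)  # uses i^t: one dt synaptic delay
            i = i * (1.0 - dt / tau_syn) + x[:, t]
            z = v > v_th
            v = torch.where(z, torch.full_like(v, v_reset), v)
            spikes.append(z.float())
            membrane.append(v)
        return torch.stack(spikes, 1), torch.stack(membrane, 1)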

hxtorch.spiking.functional.cuba_li_integration(input: torch.Tensor, params: Union[hxtorch.spiking.functional.li.CalibratedCUBALIParams, hxtorch.spiking.functional.li.CUBALIParams], hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → torch.Tensor

Leaky-integrate neuron integration for realization of readout neurons with exponential synapses. Integrates according to:

    v^{t+1} = dt / \tau_{mem} * (v_l - v^t + i^t) + v^t
    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t

Assumes i^0 = 0 and v^0 = 0.

Note: One dt synaptic delay between input and output.

Parameters
  • input – Input spikes in shape (batch, time, neurons).

  • params – CUBALIParams object holding neuron parameters.

  • dt – Integration step width.

Returns

Returns the membrane trace in shape (batch, time, neurons).
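A hedged usage sketch, taking the CUBALIParams constructor from the class listing above at face value; the concrete values, and whether plain floats are accepted, are assumptions:

    import torch
    from hxtorch.spiking.functional import cuba_li_integration
    from hxtorch.spiking.functional.li import CUBALIParams

    params = CUBALIParams(tau_mem=10e-6, tau_syn=5e-6, leak=0.0)  # assumed values
    spikes = (torch.rand(16, 100, 10) < 0.05).float()             # (batch, time, neurons)
    membrane = cuba_li_integration(spikes, params, dt=1e-6)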

hxtorch.spiking.functional.cuba_lif_integration(input: torch.Tensor, params: Union[hxtorch.spiking.functional.lif.CalibratedCUBALIFParams, hxtorch.spiking.functional.lif.CUBALIFParams], hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, torch.Tensor]

Leaky-integrate and fire neuron integration for realization of simple spiking neurons with exponential synapses. Integrates according to:

    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
    v^{t+1} = dt / \tau_{mem} * (v_l - v^t + i^t) + v^t
    z^{t+1} = 1 if v^{t+1} > params.threshold
    v^{t+1} = params.reset if z^{t+1} == 1

Assumes i^0 = 0 and v^0 = v_leak.

Note: One dt synaptic delay between input and output.

TODO: Issue 3992

Parameters
  • input – Input spikes in shape (batch, time, neurons).

  • params – CUBALIFParams object holding neuron parameters.

  • dt – Step width of integration.

Returns

Returns the spike trains and membrane trace as a tuple. Both tensors are of shape (batch, time, neurons).
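A hedged usage sketch. The class listing above elides the trailing CUBALIFParams fields (threshold, reset, and friends), so the constructor call below assumes defaults exist for them:

    import torch
    from hxtorch.spiking.functional import cuba_lif_integration
    from hxtorch.spiking.functional.lif import CUBALIFParams

    params = CUBALIFParams(tau_mem=10e-6, tau_syn=5e-6)  # remaining fields assumed to default
    x = (torch.rand(16, 100, 10) < 0.05).float()         # input spikes
    z, v = cuba_lif_integration(x, params, dt=1e-6)      # spike trains and membrane trace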

hxtorch.spiking.functional.cuba_refractory_iaf_integration(input: torch.Tensor, params: NamedTuple, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, torch.Tensor]

Integrate and fire neuron integration for realization of simple spiking neurons with exponential synapses and refractory period. Integrates according to:

    v^{t+1} = dt / \tau_{mem} * i^t + v^t
    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
    z^{t+1} = 1 if v^{t+1} > params.v_th
    v^{t+1} = params.v_reset if z^{t+1} == 1 or ref^t > 0
    ref^{t+1} -= 1
    ref^{t+1} = params.tau_ref if z^{t+1} == 1

Assumes i^0 = 0 and v^0 = params.v_reset.

Note: One dt synaptic delay between input and output.

Parameters
  • input – Input spikes in shape (batch, time, neurons).

  • params – CUBAIAFParams object holding neuron parameters.

Returns

Returns the spike trains and membrane trace as a tuple. Both tensors are of shape (batch, time, neurons).
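The refractory bookkeeping reduces to one extra counter per neuron that clamps the membrane while it is non-zero. A sketch of that counter logic in isolation, with assumed tensor shapes (batch, neurons):

    import torch

    def refractory_step(z, v, ref, v_reset, refractory_steps):
        """One step of the ref^t update above; z: spikes, v: membrane, ref: counters."""
        # Hold the membrane at reset while spiking or still refractory.
        v = torch.where((z > 0) | (ref > 0), torch.full_like(v, v_reset), v)
        # Count down and re-arm the counter for neurons that just spiked.
        ref = torch.clamp(ref - 1, min=0)
        ref = torch.where(z > 0, torch.full_like(ref, refractory_steps), ref)
        return v, ref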

hxtorch.spiking.functional.cuba_refractory_lif_integration(input: torch.Tensor, params: Union[hxtorch.spiking.functional.lif.CalibratedCUBALIFParams, hxtorch.spiking.functional.lif.CUBALIFParams], hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, torch.Tensor]

Leaky-integrate and fire neuron integration for realization of simple spiking neurons with exponential synapses and refractory period.

Integrates according to:

    i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
    v^{t+1} = dt / \tau_{mem} * (v_l - v^t + i^{t+1}) + v^t
    z^{t+1} = 1 if v^{t+1} > params.v_th
    v^{t+1} = params.v_reset if z^{t+1} == 1 or ref^{t+1} > 0
    ref^{t+1} = params.tau_ref if z^{t+1} == 1
    ref^{t+1} -= 1

Assumes i^0 = 0 and v^0 = 0.

Parameters
  • input – Input spikes in shape (batch, time, neurons).

  • params – CUBALIFParams object holding neuron parameters.

Returns

Returns the spike trains and membrane trace as a tuple. Both tensors are of shape (batch, time, neurons).

hxtorch.spiking.functional.eventprop_neuron(input: torch.Tensor, params: NamedTuple, dt: float, hw_data: Optional[torch.Tensor]) → Tuple[torch.Tensor]
hxtorch.spiking.functional.eventprop_synapse(input: torch.Tensor, weight: torch.Tensor, _: Optional[torch.Tensor] = None) → torch.Tensor
hxtorch.spiking.functional.input_neuron(input: torch.Tensor, hw_data: Optional[torch.Tensor] = None) → torch.Tensor

Input neuron, forwards spikes without modification in non-hardware runs but injects loop-back recorded spikes if available.

Parameters
  • input – Input spike tensor.

  • hw_data – Loop-back spikes, if available.

Returns

Returns the input spike tensor.
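In a software-only run the function is effectively an identity on the spike tensor; with hardware data it returns the loop-back recording instead. A small usage sketch (tensor shape assumed):

    import torch
    from hxtorch.spiking.functional import input_neuron

    x = (torch.rand(16, 100, 10) < 0.05).float()
    y = input_neuron(x)  # no hw_data: spikes pass through unmodified
    # y = input_neuron(x, hw_data=recorded)  # hardware run: recorded spikes are injected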

hxtorch.spiking.functional.linear(input: torch.Tensor, weight: torch.nn.parameter.Parameter, bias: Optional[torch.nn.parameter.Parameter] = None) → torch.Tensor

Wrap linear to allow signature inspection

hxtorch.spiking.functional.linear_sparse(input: torch.Tensor, weight: torch.nn.parameter.Parameter, connections: Optional[torch.Tensor] = None, bias: Optional[torch.nn.parameter.Parameter] = None) → torch.Tensor

Wrap linear to allow signature inspection. Disable inactive connections in weight tensor.

Parameters
  • input – The input to be multiplied with the params tensor weight.

  • weight – The weight parameter tensor. This tensor is expected to be dense due to PyTorch constraints (see issue 4039).

  • bias – The bias of the linear operation.

  • connections – A dense boolean connection mask indicating active connections. If None, the weight tensor remains untouched.
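Semantically this is a dense matrix multiply with pruned entries zeroed first; a minimal sketch of that behaviour (not the library code):

    import torch
    import torch.nn.functional as F

    def linear_sparse_sketch(input, weight, connections=None, bias=None):
        if connections is not None:
            # Inactive connections are disabled in the dense weight tensor.
            weight = weight * connections.to(weight.dtype)
        return F.linear(input, weight, bias)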

hxtorch.spiking.functional.threshold(input: torch.Tensor, method: str, alpha: float) → torch.Tensor

Selection of the used threshold function.

Parameters
  • input – Input tensor to threshold function.

  • method – The string indicator of the threshold function. Currently supported: ‘super_spike’.

  • alpha – Parameter controlling the slope of the surrogate derivative in case of ‘superspike’.

Returns

Returns the tensor of the threshold function.
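A hedged usage sketch; whether the input is expected to be pre-shifted by the threshold voltage, and the exact method string (the docstring spells both ‘super_spike’ and ‘superspike’), are assumptions here:

    import torch
    from hxtorch.spiking.functional import threshold

    v = torch.randn(16, 100, 10)          # e.g. membrane voltage minus threshold
    z = threshold(v, "superspike", 10.0)  # binary spikes with surrogate-gradient backward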