hxtorch.spiking.functional
Modules
Custom BatchDropout function
Integrate and fire neurons
Leaky-integrate neurons
Leaky-integrate and fire neurons
Wrap linear to allow signature inspection
Refractory update for neurons with refractory behaviour
Define different input spike sources
Surrogate gradient for SuperSpike.
Selection of the used threshold function.
Autograd function to ‘unterjubel’ (German for ‘inject’) hardware observables and allow correct gradient back-propagation.
Classes
Parameters for IAF integration and backward path
Parameters for CUBA LIF integration and backward path
Parameters for CUBA LI integration and backward path
Define Surrogate Gradient ‘SuperSpike’ (negative side of Fast Sigmoid). See: https://arxiv.org/abs/1705.11146
Functions
hxtorch.spiking.functional.batch_dropout(input: torch.Tensor, mask: torch.Tensor) → torch.Tensor
Applies a dropout mask to a batch of inputs.
- Parameters
input – The input tensor to apply dropout to.
mask – The dropout mask. Entries in the mask which are False will disable their corresponding entry in input.
- Returns
The input tensor with dropout mask applied.
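A minimal usage sketch of the masking semantics described above; the mask shape used here is an assumption and must match or broadcast against the input:

    import torch
    from hxtorch.spiking.functional import batch_dropout

    # Toy spike batch of shape (batch, time, neurons).
    spikes = (torch.rand(2, 5, 4) < 0.5).float()

    # Boolean mask; False entries disable the corresponding entries in `spikes`.
    # Assumed here to broadcast over the neuron dimension.
    mask = torch.tensor([True, False, True, True])

    dropped = batch_dropout(spikes, mask)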
hxtorch.spiking.functional.cuba_iaf_integration(input: torch.Tensor, params: NamedTuple, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, torch.Tensor]
Leaky-integrate and fire neuron integration for realization of simple spiking neurons with exponential synapses. Integrates according to:
v^{t+1} = dt / \tau_{mem} * (v_l - v^t + i^t) + v^t
i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
z^{t+1} = 1 if v^{t+1} > params.v_th
v^{t+1} = v_reset if z^{t+1} == 1
Assumes i^0 = 0 and v^0 = v_reset.
Note: One dt synaptic delay between input and output.
- Parameters
input – Input spikes in shape (batch, time, neurons).
params – LIFParams object holding neuron parameters.
dt – Step width of integration.
- Returns
Returns the spike trains and the membrane trace as a tuple. Both tensors are of shape (batch, time, neurons).
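For illustration, a single Euler step of the update equations above in plain PyTorch; the constants are freely chosen and not taken from hxtorch:

    import torch

    dt, tau_mem, tau_syn = 1e-6, 10e-6, 5e-6   # illustrative time constants [s]
    v_leak, v_th, v_reset = 0.0, 1.0, 0.0      # illustrative potentials

    v = torch.tensor(0.0)   # membrane state v^t (starts at v_reset)
    i = torch.tensor(0.5)   # synaptic current i^t
    x = torch.tensor(1.0)   # input spikes arriving at step t

    v_new = dt / tau_mem * (v_leak - v + i) + v   # 0.1 * 0.5 = 0.05
    i_new = i * (1 - dt / tau_syn) + x            # 0.5 * 0.8 + 1.0 = 1.4
    z_new = (v_new > v_th).float()                # 0.0, no spike this step
    v_new = torch.where(z_new.bool(), torch.tensor(v_reset), v_new)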
hxtorch.spiking.functional.cuba_li_integration(input: torch.Tensor, params: hxtorch.spiking.functional.li.CUBALIParams, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → torch.Tensor
Leaky-integrate neuron integration for realization of readout neurons with exponential synapses. Integrates according to:
v^{t+1} = dt / \tau_{mem} * (v_l - v^t + i^t) + v^t
i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
Assumes i^0, v^0 = 0.
Note: One dt synaptic delay between input and output.
- Parameters
input – Input spikes in shape (batch, time, neurons).
params – LIParams object holding neuron parameters.
dt – Integration step width.
- Returns
Returns the membrane trace in shape (batch, time, neurons).
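A plain-PyTorch sketch of these dynamics unrolled over time; the constants are illustrative and this is not the hxtorch implementation:

    import torch

    def li_trace(x, dt=1e-6, tau_mem=10e-6, tau_syn=5e-6, v_leak=0.0):
        """Membrane trace of a leaky readout neuron for input x of shape
        (batch, time, neurons); returns a trace of the same shape."""
        batch, steps, neurons = x.shape
        v = torch.zeros(batch, neurons)
        i = torch.zeros(batch, neurons)
        trace = []
        for t in range(steps):
            v = dt / tau_mem * (v_leak - v + i) + v   # v^{t+1} uses i^t
            i = i * (1 - dt / tau_syn) + x[:, t]      # i^{t+1} uses x^t
            trace.append(v)
        return torch.stack(trace, dim=1)

    membrane = li_trace((torch.rand(2, 100, 3) < 0.2).float())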
hxtorch.spiking.functional.cuba_lif_integration(input: torch.Tensor, params: hxtorch.spiking.functional.lif.CUBALIFParams, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, torch.Tensor]
Leaky-integrate and fire neuron integration for realization of simple spiking neurons with exponential synapses. Integrates according to:
i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
v^{t+1} = dt / \tau_{mem} * (v_l - v^t + i^t) + v^t
z^{t+1} = 1 if v^{t+1} > params.v_th
v^{t+1} = params.v_reset if z^{t+1} == 1
Assumes i^0 = 0 and v^0 = v_leak.
Note: One dt synaptic delay between input and output.
TODO: Issue 3992
- Parameters
input – Input spikes in shape (batch, time, neurons).
params – LIFParams object holding neuron parameters.
dt – Step width of integration.
- Returns
Returns the spike trains and the membrane trace as a tuple. Both tensors are of shape (batch, time, neurons).
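As above, a plain-PyTorch sketch of the full LIF update including threshold and reset; constants are illustrative, and a surrogate gradient (e.g. threshold(…, ‘super_spike’, alpha)) would be needed to train through the spike:

    import torch

    def lif_dynamics(x, dt=1e-6, tau_mem=10e-6, tau_syn=5e-6,
                     v_leak=0.0, v_th=0.7, v_reset=0.0):
        """Forward-only sketch of the CUBA LIF equations for input x of shape
        (batch, time, neurons); returns (spikes, membrane) of the same shape."""
        batch, steps, neurons = x.shape
        v = torch.full((batch, neurons), v_leak)
        i = torch.zeros(batch, neurons)
        spikes, membrane = [], []
        for t in range(steps):
            i_new = i * (1 - dt / tau_syn) + x[:, t]   # i^{t+1}
            v = dt / tau_mem * (v_leak - v + i) + v    # v^{t+1} uses i^t
            i = i_new
            z = (v > v_th).float()                     # z^{t+1}
            v = torch.where(z.bool(), torch.full_like(v, v_reset), v)
            spikes.append(z)
            membrane.append(v)
        return torch.stack(spikes, dim=1), torch.stack(membrane, dim=1)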
hxtorch.spiking.functional.cuba_refractory_iaf_integration(input: torch.Tensor, params: NamedTuple, hw_data: Optional[torch.Tensor] = None, dt: float = 1e-06) → Tuple[torch.Tensor, torch.Tensor]
Integrate and fire neuron integration for realization of simple spiking neurons with exponential synapses and refractory period. Integrates according to:
v^{t+1} = dt / \tau_{mem} * i^t + v^t
i^{t+1} = i^t * (1 - dt / \tau_{syn}) + x^t
z^{t+1} = 1 if v^{t+1} > params.v_th
v^{t+1} = params.v_reset if z^{t+1} == 1 or ref^t > 0
ref^{t+1} = ref^t - 1
ref^{t+1} = params.tau_ref if z^{t+1} == 1
Assumes i^0 = 0 and v^0 = v_reset.
Note: One dt synaptic delay between input and output.
- Parameters
input – Input spikes in shape (batch, time, neurons).
params – LIFParams object holding neuron parameters.
dt – Step width of integration.
- Returns
Returns the spike trains and the membrane trace as a tuple. Both tensors are of shape (batch, time, neurons).
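A per-time-step sketch of the refractory handling above; this is illustrative only, `refractory_steps` stands in for params.tau_ref expressed in integration steps, and all other names are placeholders:

    import torch

    def refractory_iaf_step(v, i, ref, x, dt=1e-6, tau_mem=10e-6, tau_syn=5e-6,
                            v_th=0.7, v_reset=0.0, refractory_steps=10):
        """One integration step; v, i, ref are per-neuron state tensors,
        x is the input arriving in this step."""
        v = dt / tau_mem * i + v            # no leak term for IAF
        i = i * (1 - dt / tau_syn) + x
        z = (v > v_th).float()
        # Clamp the membrane to reset while spiking or still refractory.
        v = torch.where(z.bool() | (ref > 0), torch.full_like(v, v_reset), v)
        ref = torch.clamp(ref - 1, min=0)
        ref = torch.where(z.bool(), torch.full_like(ref, refractory_steps), ref)
        return v, i, ref, z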
hxtorch.spiking.functional.input_neuron(input: torch.Tensor, hw_data: Optional[torch.Tensor] = None) → torch.Tensor
Input neuron, forwards spikes without modification in non-hardware runs but injects loop-back recorded spikes if available.
- Parameters
input – Input spike tensor.
hw_data – Loop-back spikes, if available.
- Returns
Returns the input spike tensor.
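Minimal usage sketch; per the description above, a software-only run is expected to forward the spikes unchanged:

    import torch
    from hxtorch.spiking.functional import input_neuron

    spikes = (torch.rand(2, 100, 3) < 0.1).float()

    # No hardware data available: the input spike tensor is forwarded as is.
    out = input_neuron(spikes)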
hxtorch.spiking.functional.linear(input: torch.Tensor, weight: torch.nn.parameter.Parameter, bias: Optional[torch.nn.parameter.Parameter] = None) → torch.Tensor
Wrap linear to allow signature inspection.
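Usage sketch, assuming the usual torch.nn.functional.linear weight layout of (out_features, in_features):

    import torch
    from hxtorch.spiking.functional import linear

    inputs = torch.rand(2, 100, 5)                 # (batch, time, in_features)
    weight = torch.nn.Parameter(torch.rand(3, 5))  # (out_features, in_features)

    out = linear(inputs, weight)                   # expected shape (2, 100, 3)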
hxtorch.spiking.functional.linear_sparse(input: torch.Tensor, weight: torch.nn.parameter.Parameter, connections: Optional[torch.Tensor] = None, bias: Optional[torch.nn.parameter.Parameter] = None) → torch.Tensor
Wrap linear to allow signature inspection. Disables inactive connections in the weight tensor.
- Parameters
input – The input to be multiplied with the weight parameter tensor.
weight – The weight parameter tensor. This tensor is expected to be dense due to PyTorch constraints (see issue 4039).
connections – A dense boolean connection mask indicating active connections. If None, the weight tensor remains untouched.
bias – The bias of the linear operation.
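Usage sketch; the assumption here is that `connections` is a boolean mask with the same layout as `weight`, with False marking pruned synapses:

    import torch
    from hxtorch.spiking.functional import linear_sparse

    inputs = torch.rand(2, 100, 4)
    weight = torch.nn.Parameter(torch.rand(3, 4))

    # Dense boolean mask; False entries are treated as inactive connections.
    connections = torch.tensor([[True, False, True, True],
                                [False, True, True, False],
                                [True, True, False, True]])

    out = linear_sparse(inputs, weight, connections=connections)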
hxtorch.spiking.functional.threshold(input: torch.Tensor, method: str, alpha: float) → torch.Tensor
Selection of the used threshold function.
- Parameters
input – Input tensor to threshold function.
method – The string indicator of the threshold function. Currently supported: ‘super_spike’.
alpha – Parameter controlling the slope of the surrogate derivative in case of ‘super_spike’.
- Returns
Returns the tensor of the threshold function.
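Usage sketch; the input is typically a membrane-related quantity to be compared against the threshold, and the alpha value of 50.0 is freely chosen for illustration:

    import torch
    from hxtorch.spiking.functional import threshold

    membrane = torch.randn(2, 100, 10, requires_grad=True)

    # Spike output with the SuperSpike surrogate derivative on the backward pass.
    spikes = threshold(membrane, "super_spike", 50.0)
    spikes.sum().backward()   # gradients flow through the surrogate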