hxtorch.spiking.functional.eventprop.EventPropSynapse

class hxtorch.spiking.functional.eventprop.EventPropSynapse(*args, **kwargs)

Bases: torch.autograd.function.Function

Synapse function for proper gradient transport when using EventPropNeuron.

__init__(*args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.

Methods

backward(ctx, grad_output)

Split grad_output coming from EventPropNeuron and return the input gradient W (lambda_{v} - lambda_{i}) and the weight gradient -tau_{s} lambda_{i} z (adjoints evaluated at spike times).

forward(ctx, input, weight[, _])

This should be used in combination with EventPropNeuron.


static backward(ctx, grad_output: torch.Tensor) → Tuple[Optional[torch.Tensor], Optional[torch.Tensor]]

Split grad_output coming from EventPropNeuron and return the input gradient and the weight gradient (adjoints evaluated at spike times):

input gradient: W (lambda_{v} - lambda_{i}), weight gradient: -tau_{s} lambda_{i} z

Parameters

grad_output – Backpropagated gradient with shape (2, batch, time, out_neurons). grad_output[0] holds the gradients to be propagated to the weight, grad_output[1] holds the gradients to be propagated to the previous neuron layer.

Returns

Returns gradients with respect to the input and the weight; the unused bias receives no gradient.
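The split described above can be sketched with plain tensor operations. The names lambda_v, lambda_i (adjoint variables of membrane and current), z (presynaptic spike train), and the value of tau_s are assumptions for illustration, not part of the hxtorch API:

```python
import torch

tau_s = 5e-3  # assumed synaptic time constant
batch, time, out_n, in_n = 2, 10, 4, 3

lambda_v = torch.randn(batch, time, out_n)  # membrane adjoint (assumed)
lambda_i = torch.randn(batch, time, out_n)  # current adjoint (assumed)
z = torch.randn(batch, time, in_n)          # presynaptic spikes (assumed)
weight = torch.randn(out_n, in_n)

# Stacked as the neuron would pass it back:
# grad_output[0] -> weight path, grad_output[1] -> input path
grad_output = torch.stack([-tau_s * lambda_i, lambda_v - lambda_i])

# Input gradient: W (lambda_v - lambda_i), per batch and time step
grad_input = torch.matmul(grad_output[1], weight)

# Weight gradient: -tau_s lambda_i z, accumulated over batch and time
grad_weight = torch.einsum("bti,bto->oi", z, grad_output[0])
```

This mirrors how the stacked grad_output lets one synapse backward produce both gradient terms in a single call.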

static forward(ctx, input: torch.Tensor, weight: torch.Tensor, _: Optional[torch.Tensor] = None) → torch.Tensor

This should be used in combination with EventPropNeuron. Applies a linear transformation to the input using the weight and stacks the result with zeros, so that the backward pass can return the EventProp-correct terms to the previous layer and to the weights.

Parameters
  • input – Input spikes in shape (batch, time, in_neurons).

  • weight – Weight in shape (out_neurons, in_neurons).

  • _ – Bias, which is unused here.

Returns

Returns a stacked tensor holding the weighted spikes and a zero tensor of the same shape.
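The forward/backward pairing can be sketched as a plain torch.autograd.Function. This is a simplified stand-in (class name and gradient bookkeeping are assumptions), not the hxtorch implementation:

```python
import torch


class StackedLinearSynapse(torch.autograd.Function):
    """Sketch of a synapse that stacks its output with zeros so the
    downstream neuron can route two separate gradient terms back."""

    @staticmethod
    def forward(ctx, input, weight, _=None):
        ctx.save_for_backward(input, weight)
        # weighted spikes: (batch, time, in) x (out, in)^T -> (batch, time, out)
        out = torch.matmul(input, weight.t())
        # stack with zeros -> shape (2, batch, time, out)
        return torch.stack([out, torch.zeros_like(out)])

    @staticmethod
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        # grad_output[1] -> gradient for the previous layer's spikes
        grad_input = torch.matmul(grad_output[1], weight)
        # grad_output[0] -> gradient for the weight, summed over batch and time
        grad_weight = torch.einsum("bti,bto->oi", input, grad_output[0])
        return grad_input, grad_weight, None
```

A short usage sketch: applying the function yields the stacked output, whose zero slice reserves the second gradient path.

```python
x = torch.randn(2, 5, 3, requires_grad=True)
w = torch.randn(4, 3, requires_grad=True)
out = StackedLinearSynapse.apply(x, w)  # shape (2, 2, 5, 4)
out.sum().backward()
```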