hxtorch.spiking.functional.refractory.Unterjubel

class hxtorch.spiking.functional.refractory.Unterjubel(*args, **kwargs)

Bases: torch.autograd.function.Function

Inject ("unterjubel") hardware observables to allow correct gradient flow.

__init__(*args, **kwargs)

Initialize self. See help(type(self)) for accurate signature.
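A short usage sketch of the behaviour documented below; the tensor names z_model and z_hw are illustrative and not part of the hxtorch API:

   import torch

   from hxtorch.spiking.functional.refractory import Unterjubel

   # z_model: differentiable spike tensor produced by the software model.
   # z_hw:    spike tensor observed on hardware, carrying no gradient itself.
   z_model = torch.rand(10, 5, requires_grad=True)
   z_hw = torch.randint(0, 2, (10, 5)).float()

   # The forward pass yields the hardware observable ...
   z = Unterjubel.apply(z_model, z_hw)
   assert torch.equal(z, z_hw)

   # ... while the backward pass directs the gradient to the model tensor.
   z.sum().backward()
   assert z_model.grad is not None

In a hardware-in-the-loop training step, z_hw would be the measured observable and z_model the corresponding quantity computed by the model.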

Methods

backward(ctx, grad_output)

Back-propagate the gradient.

forward(ctx, input, input_prime)

Returns input_prime instead of input, injecting input_prime into the forward pass while directing the gradient to input.

static backward(ctx, grad_output: torch.Tensor) → Tuple[Optional[torch.Tensor], ...]

Back-propagate the gradient.

Parameters

grad_output – The gradient flowing back from subsequent operations.

Returns

Returns the back-propagated gradient at the first position.

static forward(ctx, input: torch.Tensor, input_prime: torch.Tensor) → torch.Tensor

Returns input_prime instead of input, injecting input_prime into the forward pass while directing the gradient to input.

Parameters
  • input – Input tensor.

  • input_prime – The returned tensor.

Returns

Returns the primed tensor; it is forwarded while the gradient is directed to input.
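
Taken together, the two methods above amount to the following minimal sketch of a torch.autograd.Function; this re-implementation only illustrates the documented semantics and is not the hxtorch source:

   from typing import Optional, Tuple

   import torch


   class UnterjubelSketch(torch.autograd.Function):
       """Illustrative re-implementation of the documented behaviour."""

       @staticmethod
       def forward(ctx, input: torch.Tensor,
                   input_prime: torch.Tensor) -> torch.Tensor:
           # Inject the primed tensor into the forward pass. Nothing needs
           # to be saved for backward since the gradient is only rerouted.
           return input_prime

       @staticmethod
       def backward(ctx, grad_output: torch.Tensor) \
               -> Tuple[Optional[torch.Tensor], ...]:
           # Pass the incoming gradient to input (first position); the
           # injected input_prime receives no gradient.
           return grad_output, None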