hxtorch.perceptron.nn.Linear

class hxtorch.perceptron.nn.Linear(in_features: numbers.Integral, out_features: numbers.Integral, bias: bool = True, num_sends: Optional[numbers.Integral] = None, wait_between_events: numbers.Integral = 5, mock: bool = False, *, avg: numbers.Integral = 1, input_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, weight_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = <function clamp_weight_>)

Bases: hxtorch.perceptron.nn.MACLayer, torch.nn.modules.linear.Linear

Applies a linear transformation to the incoming data on Hicann-X.
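Semantically this corresponds to torch.nn.Linear, i.e. y = x·Wᵀ + b (bias only if enabled), with the multiply-accumulate operation executed on the chip, or in a software model when mock is enabled.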

__init__(in_features: numbers.Integral, out_features: numbers.Integral, bias: bool = True, num_sends: Optional[numbers.Integral] = None, wait_between_events: numbers.Integral = 5, mock: bool = False, *, avg: numbers.Integral = 1, input_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, weight_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = <function clamp_weight_>)
Parameters
  • in_features – Size of each input sample

  • out_features – Size of each output sample

  • bias – If set to True, the layer will learn an additive bias.

  • num_sends – Number of sends of the input. Values greater than 1 result in higher output to the neurons and increase the signal-to-noise ratio. For None, this is automatically adjusted during initialization.

  • wait_between_events – Wait time between two successive vector inputs, in FPGA clock cycles. A shorter wait time can lead to saturation of the synaptic input.

  • mock – Enable mock mode.

  • avg – Number of neurons to average over. This option is targeted at reducing statistical noise. Beware: we average over different fixed-pattern instances, but they are all configured with the same weight, so they are not trained individually. This could potentially have negative implications.

  • input_transform – Function that receives the input and returns a tensor to be used as input to the chip.

  • weight_transform – Function that receives the weight and returns a tensor to be used as weight matrix on the chip.
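A minimal usage sketch illustrating these parameters; the layer sizes, the input scaling and the clamping transform are illustrative assumptions, and any additional hxtorch initialization that a real setup may require is omitted:

import torch
import hxtorch.perceptron.nn as hxnn

# Sketch only: mock=True selects the software model, so no chip access is attempted.
layer = hxnn.Linear(
    in_features=128,
    out_features=64,
    num_sends=None,          # None: adjusted automatically during initialization
    wait_between_events=5,   # FPGA clock cycles between successive vector inputs
    mock=True,
    weight_transform=lambda w: torch.clamp(w, -63., 63.),  # assumed weight range
)

inputs = torch.rand(10, 128) * 31.  # assumed input activation range
outputs = layer(inputs)
print(outputs.shape)                # torch.Size([10, 64])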

Methods

__init__(in_features, out_features[, bias, …])

Initialize the layer; see the parameter list above.

forward(input)

Defines the computation performed at every call.

Attributes

in_features: int
out_features: int
weight: torch.Tensor

forward(input)

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
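Following the note above, the module instance should be called rather than forward() so that registered hooks run. A brief sketch (layer size, mock mode and the hook are illustrative):

import torch
import hxtorch.perceptron.nn as hxnn

layer = hxnn.Linear(4, 2, mock=True)  # mock mode avoids hardware access

# A forward hook fires only when the module instance is called.
layer.register_forward_hook(lambda module, inp, out: print("hook:", out.shape))

x = torch.rand(3, 4) * 31.   # assumed input activation range
y = layer(x)                 # runs the registered hook
# y = layer.forward(x)       # would silently skip it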