hxtorch.perceptron.nn.ExpandedConv1d
class hxtorch.perceptron.nn.ExpandedConv1d(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int]], stride: int = 1, padding: Union[int, Tuple[int, int]] = 0, dilation: Union[int, Tuple] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros', num_sends: Optional[int] = None, wait_between_events: int = 5, mock: bool = False, *, input_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, weight_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = <function clamp_weight_>, num_expansions: Optional[int] = None)

Bases: hxtorch.perceptron.nn.Conv1d
Unrolls the weight matrix for execution on hardware. This maximizes the use of the synapse array.
Caveat: Fixed-pattern noise cannot be individually compensated for during training, because the same weights are used at different locations!
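A minimal usage sketch follows; the shapes and values are made up for illustration, and mock=True is assumed here so the example runs without access to the neuromorphic hardware::

    import torch
    from hxtorch.perceptron import nn as hxnn

    # Construct the layer like a regular Conv1d; mock mode avoids hardware access.
    conv = hxnn.ExpandedConv1d(
        in_channels=4,
        out_channels=8,
        kernel_size=5,
        stride=1,
        mock=True,
    )

    x = torch.rand(2, 4, 64)   # (batch, in_channels, length), arbitrary test data
    y = conv(x)
    print(y.shape)             # torch.Size([2, 8, 60]) for kernel_size=5, stride=1, padding=0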
__init__(in_channels: int, out_channels: int, kernel_size: Union[int, Tuple[int]], stride: int = 1, padding: Union[int, Tuple[int, int]] = 0, dilation: Union[int, Tuple] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros', num_sends: Optional[int] = None, wait_between_events: int = 5, mock: bool = False, *, input_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, weight_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = <function clamp_weight_>, num_expansions: Optional[int] = None)

Parameters
in_channels – Number of channels in the input
out_channels – Number of channels produced by the convolution
kernel_size – Size of the convolving kernel
stride – Stride of the convolution
padding – Zero-padding added to both sides of the input
padding_mode – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’
dilation – Spacing between kernel elements
groups – Number of blocked connections from input channels to output channels
bias – If True, adds a learnable bias to the output
num_sends – Number of sends of the input. Values greater than 1 result in higher output to the neurons and increase the signal-to-noise ratio. For None this is automatically adjusted during initialization.
wait_between_events – Wait time between two successive vector inputs, in FPGA clock cycles. Shorter wait times can lead to saturation of the synaptic input.
mock – Enable mock mode.
input_transform – Function that receives the input and returns a tensor to be used as input to the chip.
weight_transform – Function that receives the weight and returns a tensor to be used as weight matrix on the chip.
num_expansions – Number of unrolled kernels in a single operation (see the sketch below)
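The keyword-only arguments control how data and weights are mapped to the chip. The sketch below passes a custom input_transform and an explicit num_expansions; the transform and the numeric values are hypothetical and only illustrate the call signature (mock=True is again assumed to avoid hardware access)::

    import torch
    from hxtorch.perceptron import nn as hxnn

    def scale_input(x: torch.Tensor) -> torch.Tensor:
        # Hypothetical preprocessing applied to the input before it is sent to the chip.
        return 0.5 * x

    conv = hxnn.ExpandedConv1d(
        in_channels=4,
        out_channels=8,
        kernel_size=5,
        mock=True,                    # assumed: run without hardware access
        num_sends=2,                  # send each input twice for a higher signal-to-noise ratio
        input_transform=scale_input,
        # weight_transform keeps its default clamp_weight_, which presumably clips the
        # weights to the range representable on the hardware; a custom callable could
        # be passed instead.
        num_expansions=2,             # hypothetical: two unrolled kernels per operation
    )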
Methods

__init__(in_channels, out_channels, kernel_size)
    param in_channels: Number of channels in the input

extra_repr()
    Set the extra representation of the module
Attributes
bias: Optional[torch.Tensor]

dilation: Tuple[int, …]
extra_repr() → str

    Set the extra representation of the module.

    To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.
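As a generic illustration of this mechanism (a toy example, not specific to ExpandedConv1d), a subclass can extend the printed representation by re-implementing extra_repr::

    import torch

    class ScaledLinear(torch.nn.Linear):
        """Toy subclass that adds one field to the printed representation."""

        def __init__(self, in_features: int, out_features: int, scale: float = 1.0):
            super().__init__(in_features, out_features)
            self.scale = scale

        def extra_repr(self) -> str:
            # Append the custom attribute to the inherited single-line summary.
            return super().extra_repr() + f", scale={self.scale}"

    print(ScaledLinear(3, 2, scale=0.5))
    # ScaledLinear(in_features=3, out_features=2, bias=True, scale=0.5)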
groups: int

in_channels: int

kernel_size: Tuple[int, …]

out_channels: int

output_padding: Tuple[int, …]

padding: Union[str, Tuple[int, …]]

padding_mode: str

stride: Tuple[int, …]

training: bool

transposed: bool

weight: torch.Tensor