hxtorch.perceptron.nn.Conv2d
class hxtorch.perceptron.nn.Conv2d(in_channels: numbers.Integral, out_channels: numbers.Integral, kernel_size: Union[numbers.Integral, Tuple[numbers.Integral, numbers.Integral]], stride: numbers.Integral = 1, padding: Union[numbers.Integral, Tuple[numbers.Integral, numbers.Integral]] = 0, dilation: numbers.Integral = 1, groups: numbers.Integral = 1, bias: bool = True, padding_mode: str = 'zeros', num_sends: Optional[numbers.Integral] = None, wait_between_events: numbers.Integral = 5, mock: bool = False, *, input_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, weight_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = <function clamp_weight_>)

Bases: hxtorch.perceptron.nn.ConvNd, torch.nn.modules.conv.Conv2d
Applies a 2D convolution over an input image composed of several input planes.
__init__(in_channels: numbers.Integral, out_channels: numbers.Integral, kernel_size: Union[numbers.Integral, Tuple[numbers.Integral, numbers.Integral]], stride: numbers.Integral = 1, padding: Union[numbers.Integral, Tuple[numbers.Integral, numbers.Integral]] = 0, dilation: numbers.Integral = 1, groups: numbers.Integral = 1, bias: bool = True, padding_mode: str = 'zeros', num_sends: Optional[numbers.Integral] = None, wait_between_events: numbers.Integral = 5, mock: bool = False, *, input_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = None, weight_transform: Optional[Callable[[torch.Tensor], torch.Tensor]] = <function clamp_weight_>)

Parameters
in_channels – Number of channels in the input
out_channels – Number of channels produced by the convolution
kernel_size – Size of the convolving kernel
stride – Stride of the convolution
padding – Zero-padding added to both sides of the input
padding_mode – ‘zeros’, ‘reflect’, ‘replicate’ or ‘circular’
dilation – Spacing between kernel elements
groups – Number of blocked connections from input channels to output channels
bias – If True, adds a learnable bias to the output
num_sends – Number of sends of the input. Values greater than 1 result in a higher output to the neurons and increase the signal-to-noise ratio. For None this is automatically adjusted during initialization.
mock – Enable mock mode.
wait_between_events – Wait time between two successive vector inputs, in FPGA clock cycles. Shorter wait time can lead to saturation of the synaptic input.
input_transform – Function that receives the input and returns a tensor to be used as input to the chip.
weight_transform – Function that receives the weight and returns a tensor to be used as weight matrix on the chip.
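Example

A minimal usage sketch (not taken from the upstream documentation), assuming mock mode so that no BrainScaleS-2 hardware setup is needed; depending on the hxtorch version, mock calibration parameters may have to be configured beforehand. Tensor shapes are illustrative.

import torch
import hxtorch.perceptron.nn as hxnn

# Construct the hardware-aware convolution layer. mock=True uses the
# software simulation of the chip instead of real hardware.
conv = hxnn.Conv2d(
    in_channels=1,
    out_channels=4,
    kernel_size=3,
    stride=1,
    bias=False,
    num_sends=None,         # adjusted automatically during initialization
    wait_between_events=5,
    mock=True,
)

# Illustrative input batch: (batch, channels, height, width).
x = torch.rand(2, 1, 16, 16)

# Forward pass behaves like a regular torch.nn.Conv2d call.
y = conv(x)   # shape (2, 4, 14, 14) for a 3x3 kernel and stride 1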
Methods

__init__(in_channels, out_channels, kernel_size)
    param in_channels: Number of channels in the input
Attributes

bias: Optional[torch.Tensor]
dilation: Tuple[int, …]
groups: int
in_channels: int
kernel_size: Tuple[int, …]
out_channels: int
output_padding: Tuple[int, …]
padding: Union[str, Tuple[int, …]]
padding_mode: str
stride: Tuple[int, …]
transposed: bool
weight: torch.Tensor
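The input_transform and weight_transform hooks from the constructor receive the input tensor and the weight attribute listed above, respectively, and return the tensors used on the chip. A hedged sketch with illustrative clamping ranges (the value ranges below are assumptions, not values from the documentation, and depend on the hardware configuration):

import torch
import hxtorch.perceptron.nn as hxnn

def scale_input(x: torch.Tensor) -> torch.Tensor:
    # Assumption: rescale and clamp inputs before sending them to the chip;
    # the range [0, 31] is illustrative.
    return torch.clamp(x * 31., 0., 31.)

def clamp_weight(w: torch.Tensor) -> torch.Tensor:
    # Assumption: clamp weights to a symmetric range before they are
    # written to the synapse array; the range [-63, 63] is illustrative.
    return torch.clamp(w, -63., 63.)

conv = hxnn.Conv2d(
    in_channels=1,
    out_channels=4,
    kernel_size=3,
    mock=True,
    input_transform=scale_input,
    weight_transform=clamp_weight,
)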