class torch.nn.Conv1d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T]], stride: Union[T, Tuple[T]] = 1, padding: Union[T, Tuple[T]] = 0, dilation: Union[T, Tuple[T]] = 1, groups: int = 1, bias: bool = True, padding_mode: str = 'zeros')
Applies a 1D convolution over an input signal composed of several input planes.
In the simplest case, the output value of the layer with input size $(N, C_{\text{in}}, L)$ and output $(N, C_{\text{out}}, L_{\text{out}})$ can be precisely described as:

$\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{weight}(C_{\text{out}_j}, k) \star \text{input}(N_i, k)$

where $\star$ is the valid cross-correlation operator, $N$ is a batch size, $C$ denotes a number of channels, and $L$ is the length of the signal sequence.
This module supports TensorFloat32.
- stride controls the stride for the cross-correlation, a single number or a one-element tuple.
- padding controls the amount of implicit zero-padding on both sides for padding number of points.
- dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
- groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example, at groups = in_channels, each input channel is convolved with its own set of filters, of size $\left\lfloor\frac{out\_channels}{in\_channels}\right\rfloor$ .

Note
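As a sketch of how groups and dilation affect the layer, the snippet below (with illustrative channel counts and lengths chosen here, not taken from the docs) inspects the weight and output shapes:

```python
import torch
import torch.nn as nn

# groups=2 splits the 4 input channels into two groups of 2:
# each of the 8 filters sees only 2 input channels.
grouped = nn.Conv1d(in_channels=4, out_channels=8, kernel_size=3, groups=2)
print(grouped.weight.shape)  # torch.Size([8, 2, 3])

# dilation=2 spreads the 3 kernel taps over a span of
# dilation * (kernel_size - 1) + 1 = 5 input points.
dilated = nn.Conv1d(in_channels=4, out_channels=8, kernel_size=3, dilation=2)

x = torch.randn(1, 4, 10)
print(grouped(x).shape)  # torch.Size([1, 8, 8])
print(dilated(x).shape)  # torch.Size([1, 8, 6])
```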
Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.
Note
When groups == in_channels and out_channels == K * in_channels, where K is a positive integer, this operation is also termed in the literature as depthwise convolution.

In other words, for an input of size $(N, C_{in}, L_{in})$ , a depthwise convolution with a depthwise multiplier K can be constructed by arguments $(C_\text{in}=C_{in}, C_\text{out}=C_{in} \times K, ..., \text{groups}=C_{in})$ .
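A minimal sketch of such a depthwise construction, with illustrative values $C_{in} = 3$ and $K = 2$ (chosen here for the example, not prescribed by the docs):

```python
import torch
import torch.nn as nn

C_in, K = 3, 2  # 3 input channels, depthwise multiplier 2
depthwise = nn.Conv1d(in_channels=C_in, out_channels=C_in * K,
                      kernel_size=3, groups=C_in)

# Each input channel gets its own K filters, so the weight tensor has
# a singleton channel dimension: (C_in * K, 1, kernel_size).
print(depthwise.weight.shape)  # torch.Size([6, 1, 3])

x = torch.randn(1, C_in, 10)
print(depthwise(x).shape)  # torch.Size([1, 6, 8])
```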
Note
In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on Reproducibility for background.
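Concretely, the flag mentioned above is set like this (disabling the CuDNN auto-tuner as well, as the Reproducibility notes suggest, is a common companion step):

```python
import torch

# Trade performance for reproducibility: force CuDNN to pick
# deterministic convolution algorithms.
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False  # also disable the auto-tuner
```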
Parameters
- padding_mode – 'zeros', 'reflect', 'replicate' or 'circular'. Default: 'zeros'
- bias – If True, adds a learnable bias to the output. Default: True
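With padding=1 and kernel_size=3 the output length equals the input length regardless of padding_mode; the mode only changes which values fill the padded points. A small sketch (channel counts and length chosen here for illustration):

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 8)

# padding_mode changes the padded *values* (zeros vs. reflected,
# replicated, or wrapped-around signal), not the output shape.
for mode in ('zeros', 'reflect', 'replicate', 'circular'):
    conv = nn.Conv1d(1, 1, kernel_size=3, padding=1, padding_mode=mode)
    print(mode, conv(x).shape)  # every mode yields torch.Size([1, 1, 8])
```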
Shape:
- Input: $(N, C_{in}, L_{in})$
- Output: $(N, C_{out}, L_{out})$ where

$L_{out} = \left\lfloor\frac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel\_size} - 1) - 1}{\text{stride}} + 1\right\rfloor$

If bias is True, then the values of these weights are sampled from $\mathcal{U}(-\sqrt{k}, \sqrt{k})$ where $k = \frac{groups}{C_\text{in} * \text{kernel\_size}}$
Examples:
>>> m = nn.Conv1d(16, 33, 3, stride=2)
>>> input = torch.randn(20, 16, 50)
>>> output = m(input)
© 2019 Torch Contributors
Licensed under the 3-clause BSD License.
https://pytorch.org/docs/1.7.0/generated/torch.nn.Conv1d.html