class torch.nn.ConvTranspose1d(in_channels: int, out_channels: int, kernel_size: Union[T, Tuple[T]], stride: Union[T, Tuple[T]] = 1, padding: Union[T, Tuple[T]] = 0, output_padding: Union[T, Tuple[T]] = 0, groups: int = 1, bias: bool = True, dilation: Union[T, Tuple[T]] = 1, padding_mode: str = 'zeros')
Applies a 1D transposed convolution operator over an input image composed of several input planes.
This module can be seen as the gradient of Conv1d with respect to its input. It is also known as a fractionally-strided convolution or a deconvolution (although it is not an actual deconvolution operation).
This module supports TensorFloat32.
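Example (a minimal sketch; the channel counts and lengths below are hypothetical, chosen only to show the shape arithmetic):

>>> import torch
>>> import torch.nn as nn
>>> m = nn.ConvTranspose1d(16, 33, kernel_size=3, stride=2)
>>> x = torch.randn(20, 16, 50)   # (batch, in_channels, length)
>>> y = m(x)
>>> # L_out = (L_in - 1) * stride - 2 * padding + dilation * (kernel_size - 1) + output_padding + 1
>>> # (50 - 1) * 2 - 0 + 1 * (3 - 1) + 0 + 1 = 101
>>> y.shape
torch.Size([20, 33, 101])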
stride controls the stride for the cross-correlation.
padding controls the amount of implicit zero padding on both sides for dilation * (kernel_size - 1) - padding number of points. See note below for details.
output_padding controls the additional size added to one side of the output shape. See note below for details.
dilation controls the spacing between the kernel points; also known as the à trous algorithm. It is harder to describe, but this link has a nice visualization of what dilation does.
groups controls the connections between inputs and outputs. in_channels and out_channels must both be divisible by groups. For example, at groups=1, all inputs are convolved to all outputs; at groups=2, the operation becomes equivalent to two conv layers side by side, each seeing half the input channels and producing half the output channels, with the two halves subsequently concatenated; at groups=in_channels, each input channel is convolved with its own set of filters (of size out_channels / in_channels). See the sketch below.
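A minimal sketch of the groups constraint (the channel counts here are hypothetical): in_channels=4 and out_channels=8 are both divisible by groups=4, so each group maps one input channel to two output channels with its own filters.

>>> import torch
>>> import torch.nn as nn
>>> m = nn.ConvTranspose1d(in_channels=4, out_channels=8, kernel_size=3, groups=4)
>>> m(torch.randn(1, 4, 10)).shape
torch.Size([1, 8, 12])
>>> m.weight.shape   # (in_channels, out_channels // groups, kernel_size)
torch.Size([4, 2, 3])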
Depending on the size of your kernel, several (of the last) columns of the input might be lost, because it is a valid cross-correlation, and not a full cross-correlation. It is up to the user to add proper padding.
The padding argument effectively adds dilation * (kernel_size - 1) - padding amount of zero padding to both sides of the input. This is set so that when a Conv1d and a ConvTranspose1d are initialized with the same parameters, they are inverses of each other with regard to the input and output shapes. However, when stride > 1, Conv1d maps multiple input shapes to the same output shape. output_padding is provided to resolve this ambiguity by effectively increasing the calculated output shape on one side. Note that output_padding is only used to find the output shape, but does not actually add zero-padding to the output.
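A minimal sketch of this shape relationship (the kernel size, stride, and lengths below are hypothetical): with stride=2, inputs of length 11 and 12 both produce length 6 under Conv1d, and output_padding selects which of the two lengths ConvTranspose1d reproduces.

>>> import torch
>>> import torch.nn as nn
>>> down = nn.Conv1d(8, 8, kernel_size=3, stride=2, padding=1)
>>> down(torch.randn(1, 8, 11)).shape, down(torch.randn(1, 8, 12)).shape
(torch.Size([1, 8, 6]), torch.Size([1, 8, 6]))
>>> up = nn.ConvTranspose1d(8, 8, kernel_size=3, stride=2, padding=1)
>>> up(torch.randn(1, 8, 6)).shape   # recovers the length-11 input shape
torch.Size([1, 8, 11])
>>> up = nn.ConvTranspose1d(8, 8, kernel_size=3, stride=2, padding=1, output_padding=1)
>>> up(torch.randn(1, 8, 6)).shape   # output_padding=1 recovers the length-12 input shape
torch.Size([1, 8, 12])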
In some circumstances when using the CUDA backend with CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. Please see the notes on Reproducibility for background.
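A minimal sketch of the setting referenced above (assumes a CUDA-capable machine with cuDNN available):

>>> import torch
>>> torch.backends.cudnn.deterministic = True   # may reduce performance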
padding (int or tuple, optional) – dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of the input. Default: 0
bias (bool, optional) – If True, adds a learnable bias to the output. Default: True