class torch.nn.PReLU(num_parameters: int = 1, init: float = 0.25)
Applies the element-wise function:

    PReLU(x) = max(0, x) + a * min(0, x)

or

    PReLU(x) = x,     if x >= 0
               a * x, otherwise
Here a is a learnable parameter. When called without arguments,
nn.PReLU() uses a single parameter a across all input channels. If called with
nn.PReLU(nChannels), a separate a is used for each input channel.
Note: weight decay should not be used when learning a for good performance.
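To honor the note above, the PReLU weights can be placed in their own optimizer parameter group with weight decay disabled. A minimal sketch (the model and hyperparameters here are illustrative, not part of the original doc):

```python
import torch
import torch.nn as nn

# A toy model containing a PReLU activation (illustrative only).
model = nn.Sequential(nn.Linear(4, 4), nn.PReLU())

# Collect the learnable PReLU parameters a.
prelu_params = [p for m in model.modules() if isinstance(m, nn.PReLU)
                for p in m.parameters()]
prelu_ids = {id(p) for p in prelu_params}
other_params = [p for p in model.parameters() if id(p) not in prelu_ids]

# Apply weight decay to everything except the PReLU parameters.
optimizer = torch.optim.SGD(
    [{"params": other_params, "weight_decay": 1e-4},
     {"params": prelu_params, "weight_decay": 0.0}],
    lr=0.1,
)
```

Per-parameter options via optimizer param groups are the standard way to exempt selected parameters from regularization.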
Note: The channel dim is the 2nd dim of the input. When the input has fewer than 2 dims, there is no channel dim and the number of channels = 1.
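The single-parameter and per-channel modes can be seen by inspecting the weight shape (the tensor sizes in this sketch are illustrative):

```python
import torch
import torch.nn as nn

# Default: one shared parameter a across all channels.
m = nn.PReLU()
print(m.weight.shape)  # torch.Size([1])

# Per-channel: one a per channel; the channel dim is dim 1 of the input.
m = nn.PReLU(num_parameters=3)
x = torch.randn(2, 3, 8, 8)  # (N, C, H, W) with C = 3
y = m(x)
print(m.weight.shape)  # torch.Size([3])
assert y.shape == x.shape  # output has the same shape as the input
```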
Shape:
    Input: (N, *) where * means any number of additional dimensions
    Output: (N, *), same shape as the input
Variables:
    weight (Tensor) – the learnable weights of shape (num_parameters).
Examples:

    >>> m = nn.PReLU()
    >>> input = torch.randn(2)
    >>> output = m(input)
© 2019 Torch Contributors
Licensed under the 3-clause BSD License.