torch.nn.init.calculate_gain(nonlinearity, param=None)
[source]
Return the recommended gain value for the given nonlinearity function. The values are as follows:
nonlinearity | gain
---|---
Linear / Identity | $1$
Conv{1,2,3}D | $1$
Sigmoid | $1$
Tanh | $\frac{5}{3}$
ReLU | $\sqrt{2}$
Leaky Relu | $\sqrt{\frac{2}{1 + \text{negative\_slope}^2}}$
Parameters
- nonlinearity – the non-linear function (nn.functional name)
- param – optional parameter for the non-linear function

Examples

>>> gain = nn.init.calculate_gain('leaky_relu', 0.2)  # leaky_relu with negative_slope=0.2
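As a quick sanity check, the values returned by calculate_gain can be compared against the closed-form expressions in the table above; a minimal sketch:

```python
import math
import torch.nn as nn

# Gains match the closed-form expressions from the table above.
assert nn.init.calculate_gain('linear') == 1
assert nn.init.calculate_gain('tanh') == 5.0 / 3
assert nn.init.calculate_gain('relu') == math.sqrt(2.0)
# leaky_relu with negative_slope=0.2: sqrt(2 / (1 + 0.2**2))
assert nn.init.calculate_gain('leaky_relu', 0.2) == math.sqrt(2.0 / (1 + 0.2 ** 2))
```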
torch.nn.init.uniform_(tensor, a=0.0, b=1.0)
[source]
Fills the input Tensor with values drawn from the uniform distribution $\mathcal{U}(a, b)$.

Parameters
- tensor – an n-dimensional torch.Tensor
- a – the lower bound of the uniform distribution
- b – the upper bound of the uniform distribution

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.uniform_(w)
torch.nn.init.normal_(tensor, mean=0.0, std=1.0)
[source]
Fills the input Tensor with values drawn from the normal distribution $\mathcal{N}(\text{mean}, \text{std}^2)$.

Parameters
- tensor – an n-dimensional torch.Tensor
- mean – the mean of the normal distribution
- std – the standard deviation of the normal distribution

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.normal_(w)
torch.nn.init.constant_(tensor, val)
[source]
Fills the input Tensor with the value `val`.

Parameters
- tensor – an n-dimensional torch.Tensor
- val – the value to fill the tensor with

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.constant_(w, 0.3)
torch.nn.init.ones_(tensor)
[source]
Fills the input Tensor with the scalar value $1$.

Parameters
- tensor – an n-dimensional torch.Tensor

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.ones_(w)
torch.nn.init.zeros_(tensor)
[source]
Fills the input Tensor with the scalar value $0$.

Parameters
- tensor – an n-dimensional torch.Tensor

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.zeros_(w)
torch.nn.init.eye_(tensor)
[source]
Fills the 2-dimensional input Tensor with the identity matrix. Preserves the identity of the inputs in Linear layers, where as many inputs are preserved as possible.

Parameters
- tensor – a 2-dimensional torch.Tensor

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.eye_(w)
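To see the identity-preserving behaviour concretely, here is a minimal sketch (the layer sizes are illustrative): a Linear layer whose weight is initialized with eye_ and whose bias is zeroed passes its first min(out_features, in_features) input features through unchanged.

```python
import torch
import torch.nn as nn

linear = nn.Linear(5, 3)
nn.init.eye_(linear.weight)   # 3x5 weight: identity block in the top-left corner
nn.init.zeros_(linear.bias)

x = torch.randn(2, 5)
# The 3 outputs equal the first 3 input features.
assert torch.allclose(linear(x), x[:, :3])
```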
torch.nn.init.dirac_(tensor, groups=1)
[source]
Fills the {3, 4, 5}-dimensional input Tensor with the Dirac delta function. Preserves the identity of the inputs in Convolutional layers, where as many input channels are preserved as possible. In case of groups > 1, each group of channels preserves identity.

Parameters
- tensor – a {3, 4, 5}-dimensional torch.Tensor
- groups (int, optional) – number of groups in the conv layer (default: 1)

Examples

>>> w = torch.empty(3, 16, 5, 5)
>>> nn.init.dirac_(w)
>>> w = torch.empty(3, 24, 5, 5)
>>> nn.init.dirac_(w, 3)
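Analogously for convolutions, a minimal sketch (shapes are illustrative): with matching input and output channel counts, a Conv2d initialized with dirac_ and a zero bias acts as the identity map.

```python
import torch
import torch.nn as nn

conv = nn.Conv2d(16, 16, kernel_size=3, padding=1)
nn.init.dirac_(conv.weight)   # each output channel copies its input channel
nn.init.zeros_(conv.bias)

x = torch.randn(1, 16, 8, 8)
assert torch.allclose(conv(x), x)
```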
torch.nn.init.xavier_uniform_(tensor, gain=1.0)
[source]
Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a uniform distribution. The resulting tensor will have values sampled from $\mathcal{U}(-a, a)$ where

$a = \text{gain} \times \sqrt{\frac{6}{\text{fan\_in} + \text{fan\_out}}}$

Also known as Glorot initialization.

Parameters
- tensor – an n-dimensional torch.Tensor
- gain – an optional scaling factor

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.xavier_uniform_(w, gain=nn.init.calculate_gain('relu'))
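The bound $a$ can be computed by hand and compared with the sampled values; a minimal sketch (with the default gain of 1):

```python
import math
import torch
import torch.nn as nn

w = torch.empty(3, 5)             # fan_in = 5, fan_out = 3
nn.init.xavier_uniform_(w)
bound = math.sqrt(6.0 / (5 + 3))  # a = gain * sqrt(6 / (fan_in + fan_out)), gain = 1
assert w.min().item() >= -bound and w.max().item() <= bound
```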
torch.nn.init.xavier_normal_(tensor, gain=1.0)
[source]
Fills the input Tensor with values according to the method described in Understanding the difficulty of training deep feedforward neural networks - Glorot, X. & Bengio, Y. (2010), using a normal distribution. The resulting tensor will have values sampled from $\mathcal{N}(0, \text{std}^2)$ where

$\text{std} = \text{gain} \times \sqrt{\frac{2}{\text{fan\_in} + \text{fan\_out}}}$

Also known as Glorot initialization.

Parameters
- tensor – an n-dimensional torch.Tensor
- gain – an optional scaling factor

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.xavier_normal_(w)
torch.nn.init.kaiming_uniform_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')
[source]
Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a uniform distribution. The resulting tensor will have values sampled from $\mathcal{U}(-\text{bound}, \text{bound})$ where

$\text{bound} = \text{gain} \times \sqrt{\frac{3}{\text{fan\_mode}}}$

Also known as He initialization.

Parameters
- tensor – an n-dimensional torch.Tensor
- a – the negative slope of the rectifier used after this layer (only used with 'leaky_relu')
- mode – either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backwards pass.
- nonlinearity – the non-linear function (nn.functional name), recommended to use only with 'relu' or 'leaky_relu' (default).

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.kaiming_uniform_(w, mode='fan_in', nonlinearity='relu')
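In practice the initializer is often applied to a whole model via Module.apply; a minimal sketch, assuming a small ReLU network (the architecture is illustrative):

```python
import torch.nn as nn

def init_weights(m):
    # Re-initialize every Linear layer of a ReLU network.
    if isinstance(m, nn.Linear):
        nn.init.kaiming_uniform_(m.weight, mode='fan_in', nonlinearity='relu')
        nn.init.zeros_(m.bias)

model = nn.Sequential(nn.Linear(10, 20), nn.ReLU(), nn.Linear(20, 1))
model.apply(init_weights)
```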
torch.nn.init.kaiming_normal_(tensor, a=0, mode='fan_in', nonlinearity='leaky_relu')
[source]
Fills the input Tensor with values according to the method described in Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification - He, K. et al. (2015), using a normal distribution. The resulting tensor will have values sampled from $\mathcal{N}(0, \text{std}^2)$ where

$\text{std} = \frac{\text{gain}}{\sqrt{\text{fan\_mode}}}$

Also known as He initialization.

Parameters
- tensor – an n-dimensional torch.Tensor
- a – the negative slope of the rectifier used after this layer (only used with 'leaky_relu')
- mode – either 'fan_in' (default) or 'fan_out'. Choosing 'fan_in' preserves the magnitude of the variance of the weights in the forward pass. Choosing 'fan_out' preserves the magnitudes in the backwards pass.
- nonlinearity – the non-linear function (nn.functional name), recommended to use only with 'relu' or 'leaky_relu' (default).

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.kaiming_normal_(w, mode='fan_out', nonlinearity='relu')
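The mode='fan_out', nonlinearity='relu' combination shown above is, for example, what torchvision's ResNet implementation uses for its convolutional layers; a minimal sketch (layer shapes are illustrative):

```python
import torch.nn as nn

conv = nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False)
# fan_out keeps the variance of the gradients stable in the backward pass.
nn.init.kaiming_normal_(conv.weight, mode='fan_out', nonlinearity='relu')
```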
torch.nn.init.orthogonal_(tensor, gain=1)
[source]
Fills the input Tensor
with a (semi) orthogonal matrix, as described in Exact solutions to the nonlinear dynamics of learning in deep linear neural networks
- Saxe, A. et al. (2013). The input tensor must have at least 2 dimensions, and for tensors with more than 2 dimensions the trailing dimensions are flattened.
Parameters
- tensor – an n-dimensional torch.Tensor, where $n \geq 2$
- gain – optional scaling factor

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.orthogonal_(w)
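"(Semi) orthogonal" means that the rows (or columns, whichever set is smaller) are orthonormal; a minimal sketch verifying this:

```python
import torch
import torch.nn as nn

w = torch.empty(3, 5)
nn.init.orthogonal_(w)
# For a 3x5 tensor the 3 rows are orthonormal, so w @ w.T is the identity.
assert torch.allclose(w @ w.t(), torch.eye(3), atol=1e-6)
```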
torch.nn.init.sparse_(tensor, sparsity, std=0.01)
[source]
Fills the 2D input Tensor
as a sparse matrix, where the non-zero elements will be drawn from the normal distribution $\mathcal{N}(0, 0.01)$, as described in Deep learning via Hessian-free optimization - Martens, J. (2010).

Parameters
- tensor – an n-dimensional torch.Tensor
- sparsity – the fraction of elements in each column to be set to zero
- std – the standard deviation of the normal distribution used to generate the non-zero values

Examples

>>> w = torch.empty(3, 5)
>>> nn.init.sparse_(w, sparsity=0.1)
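sparsity is the fraction of entries zeroed in each column; a minimal sketch checking that (exact zeros from the normal draw are vanishingly unlikely, so counting zeros is a reasonable test):

```python
import torch
import torch.nn as nn

w = torch.empty(4, 10)
nn.init.sparse_(w, sparsity=0.5)
# ceil(0.5 * 4) = 2 entries per column are set to zero.
zeros_per_col = (w == 0).sum(dim=0)
assert bool((zeros_per_col == 2).all())
```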
© 2019 Torch Contributors
Licensed under the 3-clause BSD License.
https://pytorch.org/docs/1.7.0/nn.init.html