tf.nn.batch_normalization(x, mean, variance, offset, scale, variance_epsilon, name=None)
Defined in tensorflow/python/ops/nn_impl.py.
See the guide: Neural Network > Normalization
Batch normalization.
As described in http://arxiv.org/abs/1502.03167. Normalizes a tensor by mean and variance, and applies (optionally) a scale \(\gamma\) to it, as well as an offset \(\beta\):

\(\frac{\gamma(x-\mu)}{\sigma}+\beta\)
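The formula can be checked numerically. A minimal sketch in plain NumPy (standing in for TensorFlow), assuming \(\sigma = \sqrt{\text{variance} + \text{variance\_epsilon}}\), which matches how the op uses the variance_epsilon argument:

```python
import numpy as np

# Toy input: a single feature column with known statistics.
x = np.array([1.0, 2.0, 3.0, 4.0])
mean = x.mean()                  # mu = 2.5
variance = x.var()               # sigma^2 = 1.25
gamma, beta = 2.0, 0.5           # scale and offset
eps = 1e-3                       # variance_epsilon

# y = gamma * (x - mu) / sqrt(variance + eps) + beta
y = gamma * (x - mean) / np.sqrt(variance + eps) + beta

# The normalized values have zero mean before the affine transform,
# so the mean of y equals beta.
print(y.mean())  # → 0.5
```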
mean, variance, offset and scale are all expected to be of one of two shapes:

- In all generality, they can have the same number of dimensions as the input x, with identical sizes as x for the dimensions that are not normalized over (the 'depth' dimension(s)), and dimension 1 for the others which are being normalized over. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keep_dims=True) during training, or running averages thereof during inference.
- In the common case where the 'depth' dimension is the last dimension in the input tensor x, they may be one dimensional tensors of the same size as the 'depth' dimension. This is the case for example for the common [batch, depth] layout of fully-connected layers, and [batch, height, width, depth] for convolutions. mean and variance in this case would typically be the outputs of tf.nn.moments(..., keep_dims=False) during training, or running averages thereof during inference.
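The two shape conventions can be illustrated with plain NumPy, used here as a stand-in for tf.nn.moments (NumPy's axis/keepdims arguments play the role of axes/keep_dims):

```python
import numpy as np

# A [batch, height, width, depth] activation, normalized over
# every dimension except the trailing 'depth' dimension.
x = np.random.rand(8, 4, 4, 3)

# Shape 1: keepdims=True -> statistics keep the rank of x, with
# size 1 in the normalized dimensions: [1, 1, 1, 3].
mean_kept = x.mean(axis=(0, 1, 2), keepdims=True)
assert mean_kept.shape == (1, 1, 1, 3)

# Shape 2: keepdims=False -> one-dimensional tensors of size 'depth'.
mean_flat = x.mean(axis=(0, 1, 2))
assert mean_flat.shape == (3,)

# Both shapes broadcast identically against x, so either works:
assert np.allclose(x - mean_kept, x - mean_flat)
```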
Args:

- x: Input Tensor of arbitrary dimensionality.
- mean: A mean Tensor.
- variance: A variance Tensor.
- offset: An offset Tensor, often denoted \(\beta\) in equations, or None. If present, will be added to the normalized tensor.
- scale: A scale Tensor, often denoted \(\gamma\) in equations, or None. If present, the scale is applied to the normalized tensor.
- variance_epsilon: A small float number to avoid dividing by 0.
- name: A name for this operation (optional).

Returns:

The normalized, scaled, offset tensor.
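As a rough illustration of how the arguments fit together, here is a hypothetical NumPy sketch of the op's math (not TensorFlow's actual implementation), assuming None for offset or scale means that term is skipped:

```python
import numpy as np

def batch_normalization(x, mean, variance, offset, scale, variance_epsilon):
    """Hypothetical NumPy sketch of the op's math (not TF's code)."""
    inv = 1.0 / np.sqrt(variance + variance_epsilon)
    if scale is not None:
        inv = inv * scale            # fold gamma into the inverse stddev
    y = (x - mean) * inv
    if offset is not None:
        y = y + offset               # add beta after normalizing
    return y

# Example with the common [batch, depth] layout and 1-D statistics.
x = np.array([[1.0, 10.0], [3.0, 30.0]])
mean, var = x.mean(axis=0), x.var(axis=0)
y = batch_normalization(x, mean, var, offset=None, scale=None,
                        variance_epsilon=1e-3)
# Each depth column now has zero mean and (approximately) unit variance.
print(np.round(y.mean(axis=0), 6))   # close to [0, 0]
```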
© 2018 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/nn/batch_normalization