A deprecated dtype policy for a Keras layer.
Inherits From: Policy
tf.keras.mixed_precision.experimental.Policy(name, loss_scale='auto')
The difference between this class and the non-experimental class is that this class has a loss_scale field and the non-experimental class does not. The loss scale is only used by tf.keras.Model.compile, which automatically wraps the optimizer with a LossScaleOptimizer if the optimizer is not already a LossScaleOptimizer. For the non-experimental Policy class, Model.compile instead wraps the optimizer with a LossScaleOptimizer only if Policy.name is "mixed_float16".
When deserializing objects with an experimental policy using functions like tf.keras.utils.deserialize_keras_object, the policy will be deserialized as the non-experimental tf.keras.mixed_precision.Policy, and the loss scale will silently be dropped. This is so that SavedModels generated with an experimental policy can be restored after the experimental policy is removed.
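A hedged sketch of that round-trip behavior, assuming TF 2.4 and the Layer.dtype_policy property; the Dense layer below is only an illustration:

```python
import tensorflow as tf

# Illustrative sketch: a layer configured with an experimental policy comes back,
# after a config round trip, with the non-experimental Policy; its loss scale is dropped.
exp_policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16', loss_scale='dynamic')
layer = tf.keras.layers.Dense(4, dtype=exp_policy)

restored = tf.keras.layers.Dense.from_config(layer.get_config())
print(isinstance(restored.dtype_policy, tf.keras.mixed_precision.experimental.Policy))  # expected: False
print(isinstance(restored.dtype_policy, tf.keras.mixed_precision.Policy))               # expected: True
print(restored.dtype_policy.name)                                                       # 'mixed_float16'
```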
| Args | |
|---|---|
| name | A string. Can be one of the following values: any dtype name, such as 'float32' or 'float64', which creates a policy with that dtype as both the compute and variable dtype; or 'mixed_float16' / 'mixed_bfloat16', which create mixed precision policies whose compute dtype is float16 or bfloat16 and whose variable dtype is float32. |
| loss_scale | A tf.compat.v1.mixed_precision.LossScale, an int (which uses a FixedLossScale), the string "dynamic" (which uses a DynamicLossScale), or None (which uses no loss scale). Defaults to "auto". In the "auto" case: 1) if name is "mixed_float16", then use loss_scale="dynamic"; 2) otherwise, do not use a loss scale. Only tf.keras.Model instances, not layers, use the loss scale, and it is only used during Model.fit, Model.train_on_batch, and other similar methods. (See the example after this table.) |
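A few hedged examples of the loss_scale argument, assuming TF 2.4:

```python
import tensorflow as tf

# Illustrative policies showing the documented loss_scale options.
p_auto = tf.keras.mixed_precision.experimental.Policy('mixed_float16')                   # 'auto' -> dynamic loss scale
p_fixed = tf.keras.mixed_precision.experimental.Policy('mixed_float16', loss_scale=128)  # FixedLossScale of 128
p_none = tf.keras.mixed_precision.experimental.Policy('float32', loss_scale=None)        # no loss scale

print(p_auto.loss_scale)   # expected: a DynamicLossScale
print(p_fixed.loss_scale)  # expected: a FixedLossScale with value 128
print(p_none.loss_scale)   # None
```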
| Attributes | |
|---|---|
| compute_dtype | The compute dtype of this policy. This is the dtype layers will do their computations in. Typically layers output tensors with the compute dtype as well. Note that even if the compute dtype is float16 or bfloat16, hardware devices may not do individual adds, multiplies, and other fundamental operations in float16 or bfloat16, but instead may do some of them in float32 for numeric stability. The compute dtype is the dtype of the inputs and outputs of the TensorFlow ops that the layer executes. Internally, many TensorFlow ops will do certain internal calculations in float32 or some other device-internal intermediate format with higher precision than float16/bfloat16, to increase numeric stability. For example, a tf.keras.layers.Dense layer, when run on a GPU with a float16 compute dtype, will pass float16 inputs to tf.linalg.matmul, but tf.linalg.matmul will use float32 intermediate math; the performance benefit of float16 is still apparent due to increased memory bandwidth and specialized float16 hardware on modern GPUs. (A short example follows this table.) |
| loss_scale | Returns the loss scale of this Policy. |
| name | Returns the name of this policy. |
| variable_dtype | The variable dtype of this policy. This is the dtype layers will create their variables in, unless a layer explicitly chooses a different dtype. If this is different than Policy.compute_dtype, layers will cast variables to the compute dtype to avoid type errors. Variable regularizers are run in the variable dtype, not the compute dtype. |
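A short sketch of these attributes for the 'mixed_float16' policy (TF 2.4):

```python
import tensorflow as tf

# 'mixed_float16' computes in float16 while keeping variables in float32.
policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16')
print(policy.compute_dtype)   # 'float16'
print(policy.variable_dtype)  # 'float32'
print(policy.name)            # 'mixed_float16'
```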
from_config
@classmethod from_config(config, custom_objects=None)
get_config
get_config()
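A hedged sketch of the config round trip, assuming TF 2.4; the fixed loss scale of 256 is an arbitrary illustrative value:

```python
import tensorflow as tf

# get_config includes the (serialized) loss scale; from_config rebuilds an
# equivalent experimental policy.
policy = tf.keras.mixed_precision.experimental.Policy('mixed_float16', loss_scale=256)
config = policy.get_config()
print(config['name'])  # 'mixed_float16'

clone = tf.keras.mixed_precision.experimental.Policy.from_config(config)
print(clone.name, clone.loss_scale)  # expected: 'mixed_float16' and a FixedLossScale of 256
```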
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.4/api_docs/python/tf/keras/mixed_precision/experimental/Policy