Adamax
Inherits From: Optimizer
Defined in tensorflow/python/keras/_impl/keras/optimizers.py.
Adamax optimizer, from Section 7 of the Adam paper.
It is a variant of Adam based on the infinity norm. Default parameters follow those provided in the paper.
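For orientation, the per-parameter update can be sketched in NumPy as below. This is a simplified rendition of Algorithm 2 from the Adam paper; the function name adamax_step and the variable names are illustrative, not the internals of the Keras implementation (which additionally adds epsilon to the denominator and supports learning-rate decay).

```python
import numpy as np

def adamax_step(param, grad, m, u, t, lr=0.002, beta_1=0.9, beta_2=0.999):
    """One AdaMax update for a single parameter tensor (t starts at 1)."""
    m = beta_1 * m + (1.0 - beta_1) * grad      # biased first-moment estimate
    u = np.maximum(beta_2 * u, np.abs(grad))    # exponentially weighted infinity norm
    lr_t = lr / (1.0 - beta_1 ** t)             # bias correction for the first moment
    return param - lr_t * m / u, m, u

# One toy step on a two-element parameter vector.
p, m, u = np.array([1.0, -2.0]), np.zeros(2), np.zeros(2)
p, m, u = adamax_step(p, np.array([0.1, -0.3]), m, u, t=1)
```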
Arguments:
  lr: float >= 0. Learning rate.
  beta_1/beta_2: floats, 0 < beta < 1. Generally close to 1.
  epsilon: float >= 0. Fuzz factor. If None, defaults to K.epsilon().
  decay: float >= 0. Learning rate decay over each update.
__init__
__init__(lr=0.002, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, **kwargs)
Initialize self. See help(type(self)) for accurate signature.
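A minimal usage sketch; the model, data shapes, and loss below are illustrative, and only the optimizer arguments come from this page.

```python
import tensorflow as tf

# Illustrative model; any Keras model is compiled the same way.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='softmax', input_shape=(784,)),
])

# Construct the optimizer with the documented defaults spelled out.
optimizer = tf.keras.optimizers.Adamax(lr=0.002, beta_1=0.9, beta_2=0.999,
                                       epsilon=None, decay=0.0)
model.compile(optimizer=optimizer,
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```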
from_config
from_config(cls, config)
get_config
get_config()
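get_config and from_config together give a serialization round trip. A short sketch, assuming the usual Keras contract that get_config returns a plain dictionary of hyperparameters:

```python
import tensorflow as tf

opt = tf.keras.optimizers.Adamax(lr=0.001)

config = opt.get_config()                                   # dict of hyperparameters
restored = tf.keras.optimizers.Adamax.from_config(config)   # equivalent optimizer
```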
get_gradients
get_gradients(loss, params)
Returns gradients of loss with respect to params.
Arguments:
  loss: Loss tensor.
  params: List of variables.
Returns:
  List of gradient tensors.
Raises:
  ValueError: In case any gradient cannot be computed (e.g. if gradient function not implemented).
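A minimal sketch of querying the symbolic gradients; it assumes TF 1.x-style graph execution, and the variable and loss below are illustrative stand-ins for a model's parameters and objective.

```python
import tensorflow as tf

w = tf.keras.backend.variable([1.0, 2.0], name='w')   # illustrative parameter
loss = tf.reduce_sum(tf.square(w))                     # illustrative loss tensor

opt = tf.keras.optimizers.Adamax(lr=0.002)
grads = opt.get_gradients(loss, [w])   # one gradient tensor per entry in params
```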
get_updates
get_updates(loss, params)
get_weights
get_weights()
Returns the current value of the weights of the optimizer.
Returns:
  A list of numpy arrays.
set_weights
set_weights(weights)
Sets the weights of the optimizer, from Numpy arrays.
Should only be called after computing the gradients (otherwise the optimizer has no weights).
Arguments:
  weights: a list of Numpy arrays. The number of arrays and their shapes must match the number of dimensions of the weights of the optimizer (i.e. it should match the output of get_weights).
Raises:
  ValueError: in case of incompatible weight shapes.
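A short sketch of saving and restoring the optimizer state; the model and data are illustrative, and a training step is run first because, as noted above, the optimizer only has weights after gradients have been computed.

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
opt = tf.keras.optimizers.Adamax()
model.compile(optimizer=opt, loss='mse')

x = np.random.rand(8, 4).astype('float32')
y = np.random.rand(8, 1).astype('float32')
model.train_on_batch(x, y)             # creates the optimizer's internal weights

state = opt.get_weights()              # list of numpy arrays (internal state)
opt.set_weights(state)                 # shapes must match get_weights() output
```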
© 2018 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/keras/optimizers/Adamax