ElasticAverageOptimizer
Inherits From: Optimizer
Defined in tensorflow/contrib/opt/python/training/elastic_average_optimizer.py.
Wrapper optimizer that implements the Elastic Average SGD algorithm. This is an async optimizer. During training, each worker updates its local variables and maintains its own local_step, which starts from 0 and is incremented by 1 after each update of the local variables. Whenever the communication period divides the local step, the worker requests the current global center variables, computes the elastic difference between the global center variables and its local variables, and then uses that elastic difference to update both the local variables and the global variables.
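In sketch form, following the asynchronous EASGD rule this optimizer is modeled on (the names below are illustrative, not the implementation's; moving_rate is the constructor argument described under __init__):

# Run by each worker whenever communication_period divides local_step:
elastic_difference = moving_rate * (local_var - global_center_var)
local_var -= elastic_difference          # pull the worker toward the center
global_center_var += elastic_difference  # pull the center toward the worker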
__init__
__init__( opt, num_worker, ea_custom_getter, communication_period=10, moving_rate=None, rho=None, use_locking=True, name='ElasticAverageOptimizer' )
Construct a new ElasticAverageOptimizer that wraps an existing optimizer.
Args:
opt: The actual optimizer that will be used to update local variables. Must be one of the Optimizer classes.
num_worker: The number of workers.
ea_custom_getter: The ElasticAverageCustomGetter.
communication_period: An int value that controls the frequency of communication between every worker and the ps.
moving_rate: A floating point value that controls the elastic difference.
rho: The amount of exploration we allow in the model. Defaults to moving_rate/learning_rate.
use_locking: If True, use locks for update operations.
name: Optional name prefix for the operations created when applying gradients. Defaults to "ElasticAverageOptimizer".
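A minimal construction sketch, assuming a typical between-graph replicated setup; worker_device, cluster, num_workers, global_step, and build_model are placeholders for your own cluster and model code:

import tensorflow as tf
from tensorflow.contrib.opt import ElasticAverageCustomGetter, ElasticAverageOptimizer

ea_custom_getter = ElasticAverageCustomGetter(worker_device=worker_device)
with tf.device(
    tf.train.replica_device_setter(
        worker_device=worker_device, cluster=cluster)), \
    tf.variable_scope('', custom_getter=ea_custom_getter):
  loss = build_model()  # variables created here get local and center copies

opt = ElasticAverageOptimizer(
    opt=tf.train.GradientDescentOptimizer(0.01),
    num_worker=num_workers,
    ea_custom_getter=ea_custom_getter,
    communication_period=10)
train_op = opt.minimize(loss, global_step=global_step)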
apply_gradients
apply_gradients( grads_and_vars, global_step=None, name=None )
Apply gradients to global variables.
This is the second part of minimize(). It returns an Operation that applies gradients.
Args:
grads_and_vars: List of (gradient, variable) pairs as returned by compute_gradients().
global_step: Optional Variable to increment by one after the variables have been updated.
name: Optional name for the returned operation. Defaults to the name passed to the Optimizer constructor.
Returns:
An Operation that applies the specified gradients. If global_step was not None, that operation also increments global_step.
Raises:
TypeError: If grads_and_vars is malformed.
ValueError: If none of the variables have gradients.
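For example, a sketch of the two-step pattern with gradient clipping in between, reusing the names from the construction sketch above (clipping is just one illustrative transformation):

grads_and_vars = opt.compute_gradients(loss)
# 'gradient' may be None for variables the loss does not depend on.
capped = [(tf.clip_by_value(g, -1.0, 1.0), v)
          for g, v in grads_and_vars if g is not None]
train_op = opt.apply_gradients(capped, global_step=global_step)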
compute_gradients
compute_gradients( loss, var_list=None, gate_gradients=optimizer.Optimizer.GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, grad_loss=None )
Compute gradients of loss for the variables in var_list.
Adds rho*elastic_difference to loss to control exploration. This is the first part of minimize(). It returns a list of (gradient, variable) pairs where "gradient" is the gradient for "variable". Note that "gradient" can be a Tensor, an IndexedSlices, or None if there is no gradient for the given variable.
Args:
loss: A Tensor containing the value to minimize.
var_list: Optional list or tuple of tf.Variable to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
gate_gradients: How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
aggregation_method: Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod.
colocate_gradients_with_ops: If True, try colocating gradients with the corresponding op.
grad_loss: Optional. A Tensor holding the gradient computed for loss.
Returns:
A list of (gradient, variable) pairs. Variable is always present, but gradient can be None.
Raises:
TypeError: If var_list contains anything other than Variable objects.
ValueError: If some arguments are invalid.
get_init_op
get_init_op(task_index)
Returns the op that sets all local variables and local center variables equal to the global center variables before training begins.
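A sketch of where this op fits, assuming opt, server, and task_index come from the setup above (the hook created by make_session_run_hook, below, handles such initialization ops for you):

init_op = opt.get_init_op(task_index)
with tf.Session(server.target) as sess:
  sess.run(init_op)  # sync local and local-center variables to the global center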
get_name
get_name()
get_slot
get_slot( var, name )
Return a slot named name created for var by the Optimizer.
Some Optimizer subclasses use additional variables. For example Momentum and Adagrad use variables to accumulate updates. This method gives access to these Variable objects if for some reason you need them.
Use get_slot_names() to get the list of slot names created by the Optimizer.
Args:
var: A variable passed to minimize() or apply_gradients().
name: A string.
Returns:
The Variable for the slot if it was created, None otherwise.
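A small sketch that enumerates whatever slots the wrapped optimizer created for a given variable var (slot names depend on the inner optimizer):

for slot_name in opt.get_slot_names():
  slot_var = opt.get_slot(var, slot_name)
  if slot_var is not None:  # None if no such slot exists for this variable
    print(slot_name, slot_var)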
get_slot_names
get_slot_names()
Return a list of the names of slots created by the Optimizer.
See get_slot().
Returns:
A list of strings.
make_session_run_hook
make_session_run_hook( is_chief, task_index )
Creates a session run hook that handles ElasticAverageOptimizer ops, such as initialization.
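A usage sketch, assuming server, is_chief, task_index, and train_op come from a standard between-graph training setup like the one above:

ea_hook = opt.make_session_run_hook(is_chief, task_index)
with tf.train.MonitoredTrainingSession(
    master=server.target, is_chief=is_chief, hooks=[ea_hook]) as sess:
  while not sess.should_stop():
    sess.run(train_op)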
minimize
minimize( loss, global_step=None, var_list=None, gate_gradients=GATE_OP, aggregation_method=None, colocate_gradients_with_ops=False, name=None, grad_loss=None )
Add operations to minimize loss by updating var_list.
This method simply combines calls to compute_gradients() and apply_gradients(). If you want to process the gradients before applying them, call compute_gradients() and apply_gradients() explicitly instead of using this function.
Args:
loss: A Tensor containing the value to minimize.
global_step: Optional Variable to increment by one after the variables have been updated.
var_list: Optional list or tuple of Variable objects to update to minimize loss. Defaults to the list of variables collected in the graph under the key GraphKeys.TRAINABLE_VARIABLES.
gate_gradients: How to gate the computation of gradients. Can be GATE_NONE, GATE_OP, or GATE_GRAPH.
aggregation_method: Specifies the method used to combine gradient terms. Valid values are defined in the class AggregationMethod.
colocate_gradients_with_ops: If True, try colocating gradients with the corresponding op.
name: Optional name for the returned operation.
grad_loss: Optional. A Tensor holding the gradient computed for loss.
Returns:
An Operation that updates the variables in var_list. If global_step was not None, that operation also increments global_step.
Raises:
ValueError: If some of the variables are not Variable objects.
Eager Compatibility:
When eager execution is enabled, loss should be a Python function that takes elements of var_list as arguments and computes the value to be minimized. If var_list is None, loss should take no arguments. Minimization (and gradient computation) is done with respect to the elements of var_list if not None, else with respect to any trainable variables created during the execution of the loss function. gate_gradients, aggregation_method, colocate_gradients_with_ops and grad_loss are ignored when eager execution is enabled.
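To illustrate the callable-loss contract described above (shown with a plain GradientDescentOptimizer, since this page does not state whether the contrib wrapper itself supports eager execution):

import tensorflow as tf
tf.enable_eager_execution()

w = tf.Variable(2.0)
def loss_fn():
  return (w - 3.0) ** 2  # minimized w.r.t. the variables it uses

sgd = tf.train.GradientDescentOptimizer(0.1)
sgd.minimize(loss_fn, var_list=[w])  # one gradient step on w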
variables
variables()
A list of variables which encode the current state of the Optimizer.
Includes slot variables and additional global variables created by the optimizer in the current default graph.
Returns:
A list of variables.
Class Members
BETA
GATE_GRAPH
GATE_NONE
GATE_OP
© 2018 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/api_docs/python/tf/contrib/opt/ElasticAverageOptimizer