Retrieve Adagrad embedding parameters with debug support.
```python
tf.raw_ops.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug(
    num_shards, shard_id, table_id=-1, table_name='', config='', name=None
)
```
An op that retrieves optimization parameters from TPU embedding memory to host memory. Must be preceded by a `ConfigureTPUEmbeddingHost` op that sets up the correct embedding table configuration. For example, this op is used to retrieve updated parameters before saving a checkpoint.
| Args | |
|---|---|
| `num_shards` | An `int`. The number of shards into which the embedding tables are divided. |
| `shard_id` | An `int`. The identifier of the shard to retrieve. |
| `table_id` | An optional `int`. Defaults to `-1`. |
| `table_name` | An optional `string`. Defaults to `""`. |
| `config` | An optional `string`. Defaults to `""`. |
| `name` | A name for the operation (optional). |
| Returns | |
|---|---|
| A tuple of `Tensor` objects `(parameters, accumulators, gradient_accumulators)`. | |
| `parameters` | A `Tensor` of type `float32`. |
| `accumulators` | A `Tensor` of type `float32`. |
| `gradient_accumulators` | A `Tensor` of type `float32`. |
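A minimal usage sketch follows. The single-shard layout (`num_shards=1`, `shard_id=0`) and `table_id=0` are illustrative assumptions; on a real TPU host, a `ConfigureTPUEmbeddingHost` op with a matching embedding configuration must already have run.

```python
import tensorflow as tf

# Illustrative sharding: retrieve shard 0 of 1 for embedding table 0.
# Assumes ConfigureTPUEmbeddingHost has already set up the table layout.
parameters, accumulators, gradient_accumulators = (
    tf.raw_ops.RetrieveTPUEmbeddingAdagradParametersGradAccumDebug(
        num_shards=1,
        shard_id=0,
        table_id=0,  # alternatively, select the table via table_name
    )
)

# All three outputs are float32 Tensors; a typical use is reading the
# updated parameters and accumulators back before saving a checkpoint.
```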
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/raw_ops/RetrieveTPUEmbeddingAdagradParametersGradAccumDebug