Looks up embeddings for the given
ids from a list of tensors.
tf.compat.v1.nn.embedding_lookup( params, ids, partition_strategy='mod', name=None, validate_indices=True, max_norm=None )
This function is used to perform parallel lookups on the list of tensors in
params. It is a generalization of tf.gather, where params is interpreted as
a partitioning of a large embedding tensor. params may be a
PartitionedVariable as returned by using
tf.compat.v1.get_variable() with a partitioner.
If len(params) > 1, each element id of ids is partitioned between the
elements of params according to the partition_strategy. In all strategies,
if the id space does not evenly divide the number of partitions, each of
the first (max_id + 1) % len(params) partitions will be assigned one more id.
"mod", we assign each id to partition
p = id % len(params). For instance, 13 ids are split across 5 partitions as:
[[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]
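The "mod" split above can be reproduced in a few lines of plain Python. This is a sketch of the assignment rule only, not TensorFlow's internal code, and the helper name mod_partitions is made up for illustration:

```python
def mod_partitions(num_ids, num_parts):
    # "mod" strategy: id i is assigned to partition i % num_parts
    parts = [[] for _ in range(num_parts)]
    for i in range(num_ids):
        parts[i % num_parts].append(i)
    return parts

print(mod_partitions(13, 5))
# -> [[0, 5, 10], [1, 6, 11], [2, 7, 12], [3, 8], [4, 9]]
```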
"div", we assign ids to partitions in a contiguous manner. In this case, 13 ids are split across 5 partitions as:
[[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]
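The "div" split can likewise be sketched in plain Python (again an illustration of the rule, not TensorFlow's implementation; div_partitions is a made-up name). Note how the first (max_id + 1) % len(params) partitions each receive one extra id:

```python
def div_partitions(num_ids, num_parts):
    # "div" strategy: contiguous blocks of ids; the first
    # num_ids % num_parts partitions each get one extra id
    base, extra = divmod(num_ids, num_parts)
    parts, start = [], 0
    for p in range(num_parts):
        size = base + (1 if p < extra else 0)
        parts.append(list(range(start, start + size)))
        start += size
    return parts

print(div_partitions(13, 5))
# -> [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9, 10], [11, 12]]
```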
If the input ids are ragged tensors, partition variables are not supported and the partition strategy and the max_norm are ignored. The results of the lookup are concatenated into a dense tensor. The returned tensor has shape
shape(ids) + shape(params)[1:].
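To make the lookup and the output-shape rule concrete, the following NumPy sketch emulates what the op computes for a list of "mod"-sharded params. It is an assumption-laden illustration (the shard layout and the helper lookup are constructed here for the example), not TensorFlow code:

```python
import numpy as np

# Emulate a "mod"-sharded embedding table: a 13 x 4 matrix split
# across 3 shards, where shard k holds the rows with id % 3 == k,
# in increasing id order.
vocab, dim, shards = 13, 4, 3
full = np.arange(vocab * dim, dtype=np.float32).reshape(vocab, dim)
params = [full[k::shards] for k in range(shards)]

def lookup(params, ids):
    # Row id lives in shard id % len(params), at offset id // len(params).
    ids = np.asarray(ids)
    n = len(params)
    rows = [params[i % n][i // n] for i in ids.ravel()]
    # Result shape follows the documented rule:
    # shape(ids) + shape(params)[1:]
    return np.stack(rows).reshape(ids.shape + (dim,))

result = lookup(params, [[0, 7], [12, 3]])
print(result.shape)  # (2, 2, 4): shape(ids) + shape(params)[1:]
```

Each looked-up row matches the corresponding row of the unsharded matrix, which is exactly the behavior the partitioning is meant to preserve.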
Args:
  params: A single tensor representing the complete embedding tensor, or a
    list of P tensors all of same shape except for the first dimension,
    representing sharded embedding tensors. Alternatively, a
    PartitionedVariable, created by partitioning along dimension 0. Each
    element must be appropriately sized for the given partition_strategy.
  ids: A Tensor or a RaggedTensor with type int32 or int64 containing the
    ids to be looked up in params.
  partition_strategy: A string specifying the partitioning strategy,
    relevant if len(params) > 1. Currently "div" and "mod" are supported.
    Default is "mod".
  name: A name for the operation (optional).
  validate_indices: DEPRECATED. If this operation is assigned to CPU,
    values in ids are always validated to be within range. If assigned to
    GPU, out-of-bound indices result in safe but unspecified behavior,
    which may include raising an error.
  max_norm: If not None, each embedding is clipped if its l2-norm is
    larger than this value.

Returns:
  A Tensor or a RaggedTensor, depending on the input, with the same type
  as the tensors in params.

Raises:
  ValueError: If params is empty.
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.