tf.contrib.layers.embed_sequence(
ids,
vocab_size=None,
embed_dim=None,
unique=False,
initializer=None,
regularizer=None,
trainable=True,
scope=None,
reuse=None
)
Defined in tensorflow/contrib/layers/python/layers/encoders.py.
See the guide: Layers (contrib) > Higher level ops for building neural network layers
Maps a sequence of symbols to a sequence of embeddings.
A typical use case is reusing embeddings between an encoder and decoder, as sketched in the example below.
Args:
  ids: [batch_size, doc_length] Tensor of type int32 or int64 with symbol ids.
  vocab_size: Integer number of symbols in vocabulary.
  embed_dim: Integer number of dimensions for embedding matrix.
  unique: If True, first computes the unique set of indices, then looks up each embedding once, repeating them in the output as needed.
  initializer: An initializer for the embeddings; if None, the default for the current scope is used.
  regularizer: Optional regularizer for the embeddings.
  trainable: If True, also adds variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  scope: Optional string specifying the variable scope for the op, required if reuse=True.
  reuse: If True, variables inside the op will be reused.

Returns:
  Tensor of [batch_size, doc_length, embed_dim] with embedded sequences.
Raises:
  ValueError: If embed_dim or vocab_size are not specified when reuse is None or False.
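A minimal usage sketch under TensorFlow 1.x graph semantics. The batch of ids, the vocabulary size of 4, the embedding width of 5, and the scope name 'shared_embed' are illustrative assumptions, not values from the docs above:

import tensorflow as tf

# Two "documents" of four symbol ids each: shape [batch_size=2, doc_length=4].
ids = tf.constant([[0, 3, 2, 1],
                   [1, 1, 0, 3]], dtype=tf.int64)

# Creates a [vocab_size, embed_dim] = [4, 5] embedding matrix inside the op
# and looks up each id, yielding a Tensor of shape [2, 4, 5].
encoder_in = tf.contrib.layers.embed_sequence(
    ids, vocab_size=4, embed_dim=5, scope='shared_embed')

# Typical use case: share the same embedding matrix with a decoder.
# With reuse=True the existing 'shared_embed' variables are looked up
# rather than created again.
decoder_in = tf.contrib.layers.embed_sequence(
    ids, scope='shared_embed', reuse=True)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(encoder_in).shape)  # (2, 4, 5)

Note that the second call omits vocab_size and embed_dim: per the Raises note above, leaving them unspecified is only an error when reuse is None or False.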