tf.contrib.layers.bow_encoder(
ids,
vocab_size,
embed_dim,
sparse_lookup=True,
initializer=None,
regularizer=None,
trainable=True,
scope=None,
reuse=None
)
Defined in tensorflow/contrib/layers/python/layers/encoders.py.
Maps a sequence of symbols to a vector per example by averaging embeddings.
Args:
  ids: [batch_size, doc_length] Tensor or SparseTensor of type int32 or int64 with symbol ids.
  vocab_size: Integer number of symbols in vocabulary.
  embed_dim: Integer number of dimensions for embedding matrix.
  sparse_lookup: bool, if True, converts ids to a SparseTensor and performs a sparse embedding lookup. This is usually faster, but not desirable if padding tokens should have an embedding. Empty rows are assigned a special embedding.
  initializer: An initializer for the embeddings; if None, the default for the current scope is used.
  regularizer: Optional regularizer for the embeddings.
  trainable: If True, also adds variables to the graph collection GraphKeys.TRAINABLE_VARIABLES (see tf.Variable).
  scope: Optional string specifying the variable scope for the op; required if reuse=True.
  reuse: If True, variables inside the op will be reused.

Returns:
  Encoding Tensor [batch_size, embed_dim] produced by averaging embeddings.

Raises:
  ValueError: If embed_dim or vocab_size are not specified.
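Example:

A minimal usage sketch, assuming a TensorFlow 1.x environment where tf.contrib is available; the toy ids batch, vocab_size, embed_dim, and scope name below are illustrative values, not taken from the library docs.

  import tensorflow as tf  # TensorFlow 1.x

  # A toy batch of two "documents", each a sequence of three symbol ids.
  ids = tf.constant([[1, 4, 2],
                     [3, 0, 0]], dtype=tf.int64)

  # Embed each id and average the embeddings per example.
  encoding = tf.contrib.layers.bow_encoder(
      ids, vocab_size=10, embed_dim=5, scope='bow')

  with tf.Session() as sess:
      sess.run(tf.global_variables_initializer())
      print(sess.run(encoding).shape)  # (2, 5) == [batch_size, embed_dim]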
https://www.tensorflow.org/api_docs/python/tf/contrib/layers/bow_encoder