```python
tf.contrib.nn.rank_sampled_softmax_loss(
    weights,
    biases,
    labels,
    inputs,
    num_sampled,
    num_resampled,
    num_classes,
    num_true,
    sampled_values,
    resampling_temperature,
    remove_accidental_hits,
    partition_strategy,
    name=None
)
```
Computes softmax loss using rank-based adaptive resampling.
This has been shown to improve rank loss after training compared to
tf.nn.sampled_softmax_loss. For a description of the algorithm and some experimental results, please see: TAPAS: Two-pass Approximate Adaptive Sampling for Softmax.
Sampling follows two phases:

* In the first phase, num_sampled classes are selected using tf.nn.learned_unigram_candidate_sampler or supplied sampled_values. The logits are calculated on those sampled classes. This phase is similar to tf.nn.sampled_softmax_loss.
* In the second phase, the num_resampled classes with the highest predicted probability are kept. Probabilities are LogSumExp(logits / resampling_temperature), where the sum is over inputs.

The resampling_temperature parameter controls the "adaptiveness" of the resampling. At lower temperatures, resampling is more adaptive because it picks more candidates close to the predicted classes. A common strategy is to decrease the temperature as training proceeds.
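To make the second phase concrete, here is a minimal sketch of the resampling step, assuming TF 1.x graph mode. It is not the library's implementation: sampled_weights and sampled_biases stand for the embeddings and biases already gathered for the num_sampled candidate classes, and the helper name is hypothetical.

```python
import tensorflow as tf

def resample_top_classes(inputs, sampled_weights, sampled_biases,
                         resampling_temperature, num_resampled):
  """Sketch of phase two: keep the num_resampled highest-scoring classes."""
  # Logits of each example against the sampled classes:
  # shape [batch_size, num_sampled].
  logits = tf.matmul(inputs, sampled_weights, transpose_b=True)
  logits = tf.nn.bias_add(logits, sampled_biases)
  # Score each sampled class by LogSumExp of the temperature-scaled
  # logits, summing over the batch (the "sum is over inputs" above).
  scores = tf.reduce_logsumexp(logits / resampling_temperature, axis=0)
  # Keep the top num_resampled classes for the final sampled softmax.
  _, resampled_indices = tf.nn.top_k(scores, k=num_resampled)
  return resampled_indices
```

For the temperature schedule, a decaying scalar such as one produced by tf.train.exponential_decay evaluated at the global step is one way to realize the decrease-over-training strategy mentioned above.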
See tf.nn.sampled_softmax_loss for more documentation on sampling and for typical default values for some of the parameters.
This operation is for training only. It is generally an underestimate of the full softmax loss.
A common use case is to use this method for training and to calculate the full softmax loss for evaluation or inference. In this case, you must set partition_strategy="div" for the two losses to be consistent ("div" assigns class ids to weight shards contiguously, so sharded lookups during training index the same rows as the full matmul during evaluation), as in the following example:
if mode == "train": loss = rank_sampled_softmax_loss( weights=weights, biases=biases, labels=labels, inputs=inputs, ..., partition_strategy="div") elif mode == "eval": logits = tf.matmul(inputs, tf.transpose(weights)) logits = tf.nn.bias_add(logits, biases) labels_one_hot = tf.one_hot(labels, n_classes) loss = tf.nn.softmax_cross_entropy_with_logits( labels=labels_one_hot, logits=logits)
Args:

* weights: A Tensor or PartitionedVariable of shape [num_classes, dim], or a list of Tensor objects whose concatenation along dimension 0 has shape [num_classes, dim]. The (possibly-sharded) class embeddings.
* biases: A Tensor or PartitionedVariable of shape [num_classes]. The (possibly-sharded) class biases.
* labels: A Tensor of type int64 and shape [batch_size, num_true]. The target classes. Note that this format differs from the labels argument of tf.nn.softmax_cross_entropy_with_logits.
* inputs: A Tensor of shape [batch_size, dim]. The forward activations of the input network.
* num_sampled: An int. The number of classes to randomly sample per batch.
* num_resampled: An int. The number of classes to select from the num_sampled classes using the adaptive resampling algorithm. Must be less than num_sampled.
* num_classes: An int. The number of possible classes.
* num_true: An int. The number of target classes per training example.
* sampled_values: A tuple of (sampled_candidates, true_expected_count, sampled_expected_count) returned by a *_candidate_sampler function. If None, defaults to tf.nn.learned_unigram_candidate_sampler.
* resampling_temperature: A scalar Tensor with the temperature parameter for the adaptive resampling algorithm.
* remove_accidental_hits: A bool. Whether to remove "accidental hits" where a sampled class equals one of the target classes.
* partition_strategy: A string specifying the partitioning strategy, relevant if len(weights) > 1. Currently "div" and "mod" are supported. See tf.nn.embedding_lookup for more details.
* name: A name for the operation (optional).

Returns:

A batch_size 1-D tensor of per-example sampled softmax losses.

Raises:

ValueError: If num_sampled <= num_resampled.
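Putting the parameters together, here is a minimal end-to-end sketch of a training-mode call in TF 1.x graph mode. The shapes, variable names, and hyperparameter values (num_sampled=256, num_resampled=64, the constant temperature, and so on) are illustrative assumptions, not recommendations:

```python
import tensorflow as tf

num_classes = 10000
dim = 128
batch_size = 32
num_true = 1

# (Possibly-sharded) class embeddings and biases.
weights = tf.get_variable("class_weights", [num_classes, dim])
biases = tf.get_variable("class_biases", [num_classes])

# Target classes and forward activations of the input network.
labels = tf.placeholder(tf.int64, [batch_size, num_true])
inputs = tf.placeholder(tf.float32, [batch_size, dim])

per_example_loss = tf.contrib.nn.rank_sampled_softmax_loss(
    weights=weights,
    biases=biases,
    labels=labels,
    inputs=inputs,
    num_sampled=256,      # phase one: candidates per batch (assumed value)
    num_resampled=64,     # phase two: must be < num_sampled (assumed value)
    num_classes=num_classes,
    num_true=num_true,
    sampled_values=None,  # fall back to learned_unigram_candidate_sampler
    resampling_temperature=tf.constant(1.0),
    remove_accidental_hits=True,
    partition_strategy="div")

# The op returns per-example losses; reduce for a scalar training loss.
loss = tf.reduce_mean(per_example_loss)
```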
© 2018 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.