Applies softmax to a batched N-D SparseTensor.
The inputs represent an N-D SparseTensor with logical shape [..., B, C] (where N >= 2), and with indices sorted in the canonical lexicographic order.
This op is equivalent to applying the normal tf.nn.softmax() to each innermost logical submatrix with shape [B, C], but with the catch that the implicitly zero elements do not participate. Specifically, the algorithm is equivalent to the following: (1) Applies tf.nn.softmax() to a densified view of each innermost submatrix with shape [B, C], along the size-C dimension; (2) Masks out the original implicitly-zero locations; (3) Renormalizes the remaining elements.

Hence, the SparseTensor result has exactly the same non-zero indices and shape.
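To make those three steps concrete, here is a small self-contained sketch, separate from the TensorFlow API, that computes the same masked softmax over the stored entries of one logical row; the helper SparseRowSoftmax and its (column, value) row representation are made up for illustration:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <limits>
#include <utility>
#include <vector>

// Hypothetical helper: softmax over only the stored (non-zero) entries of one
// logical row, given as (column, value) pairs. Restricting the sum to the
// stored entries gives the same result as densify -> softmax -> mask ->
// renormalize, because the masked-out terms drop out of the denominator.
std::vector<std::pair<int, float>> SparseRowSoftmax(
    const std::vector<std::pair<int, float>>& row) {
  // Shift by the max stored value for numerical stability; softmax is
  // invariant under this shift.
  float max_val = -std::numeric_limits<float>::infinity();
  for (const auto& e : row) max_val = std::max(max_val, e.second);

  std::vector<std::pair<int, float>> out = row;
  float sum = 0.0f;
  for (auto& e : out) {
    e.second = std::exp(e.second - max_val);
    sum += e.second;
  }
  // Renormalize over the stored entries only; the indices are unchanged.
  for (auto& e : out) e.second /= sum;
  return out;
}

int main() {
  // One logical row with non-zeros at columns 0 and 2; columns 1 and 3 are
  // implicitly zero and do not participate.
  const std::vector<std::pair<int, float>> row = {{0, 1.0f}, {2, 2.0f}};
  for (const auto& e : SparseRowSoftmax(row)) {
    std::printf("col %d -> %f\n", e.first, e.second);
  }
  // Prints ~0.268941 and ~0.731059: exp(1) and exp(2), each divided by
  // exp(1) + exp(2).
  return 0;
}
```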
Arguments:

* scope: A Scope object
* sp_indices: 2-D. NNZ x R matrix with the indices of non-empty values in a SparseTensor, in canonical ordering.
* sp_values: 1-D. NNZ non-empty values corresponding to sp_indices.
* sp_shape: 1-D. Input SparseTensor's shape.

Returns:

* Output: 1-D. The NNZ values for the result SparseTensor.
| Constructors and Destructors |
| --- |
| SparseSoftmax(const ::tensorflow::Scope & scope, ::tensorflow::Input sp_indices, ::tensorflow::Input sp_values, ::tensorflow::Input sp_shape) |
| Public functions |
| --- |
| ::tensorflow::Node * node() const |
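For reference, here is a minimal end-to-end sketch of building and running this op through the C++ client session API, in the style of the standard ClientSession examples; the 2x3 logical shape and the three non-zero values are made-up example data:

```cpp
#include <vector>

#include "tensorflow/cc/client/client_session.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/tensor.h"

int main() {
  using namespace tensorflow;       // NOLINT
  using namespace tensorflow::ops;  // NOLINT

  Scope root = Scope::NewRootScope();

  // A 2x3 logical SparseTensor with three non-empty entries, indices in
  // canonical row-major order. sp_indices and sp_shape must be int64.
  auto sp_indices = Const(root, {{0LL, 0LL}, {0LL, 2LL}, {1LL, 1LL}});
  auto sp_values = Const(root, {1.0f, 2.0f, 3.0f});
  auto sp_shape = Const(root, {2LL, 3LL});

  auto softmax = SparseSoftmax(root, sp_indices, sp_values, sp_shape);

  ClientSession session(root);
  std::vector<Tensor> outputs;
  TF_CHECK_OK(session.Run({softmax.output}, &outputs));

  // outputs[0] holds the NNZ result values; indices and shape are unchanged.
  // Row 0 normalizes over its two non-zeros (~0.269, ~0.731); row 1's lone
  // non-zero becomes 1.0.
  LOG(INFO) << outputs[0].DebugString();
  return 0;
}
```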