Gated Recurrent Unit cell.
```python
tf.compat.v1.nn.rnn_cell.GRUCell(
    num_units, activation=None, reuse=None, kernel_initializer=None,
    bias_initializer=None, name=None, dtype=None, **kwargs
)
```
Note that this cell is not optimized for performance. Please use
tf.contrib.cudnn_rnn.CudnnGRU for better performance on GPU, or
tf.contrib.rnn.GRUBlockCellV2 for better performance on CPU.
| Args | |
|---|---|
| `num_units` | int, the number of units in the GRU cell. |
| `activation` | Nonlinearity to use. Default: `tanh`. |
| `reuse` | (optional) Python boolean describing whether to reuse variables in an existing scope. If not `True`, and the existing scope already has the given variables, an error is raised. |
| `kernel_initializer` | (optional) The initializer to use for the weight and projection matrices. |
| `bias_initializer` | (optional) The initializer to use for the bias. |
| `name` | String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require `reuse=True` in such cases. |
| `dtype` | Default dtype of the layer (default of `None` means use the type of the first input). |
| `**kwargs` | Dict, keyword named properties for common layer attributes, like `trainable`, when constructing the cell from configs of `get_config()`. |
| Properties | |
|---|---|
| `graph` | DEPRECATED FUNCTION |
| `output_size` | Integer or TensorShape: size of outputs produced by this cell. |
| `state_size` | Size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape, or a tuple of Integers or TensorShapes. |
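As background for the properties above, the recurrence this cell computes can be sketched in plain NumPy. This is a simplified illustration of the standard GRU equations (update gate, reset gate, candidate state); the weight names and layout here are hypothetical and do not mirror TensorFlow's internal variables:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU step: x is [batch, input_dim], h is [batch, num_units]."""
    W_z, U_z, b_z, W_r, U_r, b_r, W_c, U_c, b_c = params
    z = sigmoid(x @ W_z + h @ U_z + b_z)        # update gate
    r = sigmoid(x @ W_r + h @ U_r + b_r)        # reset gate
    c = np.tanh(x @ W_c + (r * h) @ U_c + b_c)  # candidate state
    return z * h + (1.0 - z) * c                # new hidden state

rng = np.random.default_rng(0)
input_dim, num_units, batch = 3, 4, 2
# Hypothetical parameter shapes: one (input, units), (units, units), (units,)
# triple per gate (z, r) and for the candidate (c).
params = [rng.normal(size=s) for s in
          [(input_dim, num_units), (num_units, num_units), (num_units,)] * 3]
x = rng.normal(size=(batch, input_dim))
h = np.zeros((batch, num_units))  # zero initial state, as zero_state would return
h_new = gru_step(x, h, params)
print(h_new.shape)  # (2, 4)
```

Note that `output_size` and `state_size` coincide for a GRU: the cell's output at each step is its hidden state, so both are `num_units`.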
```python
get_initial_state(
    inputs=None, batch_size=None, dtype=None
)
```

```python
zero_state(
    batch_size, dtype
)
```
Return zero-filled state tensor(s).
| Args | |
|---|---|
| `batch_size` | int, float, or unit Tensor representing the batch size. |
| `dtype` | the data type to use for the state. |

Returns: If `state_size` is an int or TensorShape, the return value is a tensor of shape `[batch_size, state_size]` filled with zeros. If `state_size` is a nested list or tuple, the return value is a nested list or tuple (of the same structure) of 2-D tensors with shape `[batch_size, s]` for each `s` in `state_size`.
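The shape rule above can be sketched with a small NumPy helper (a hypothetical analogue of `zero_state`, not TensorFlow code):

```python
import numpy as np

def zero_state(batch_size, state_size, dtype=np.float32):
    """Mimic zero_state's shape rule: an int state_size yields one
    [batch_size, state_size] array of zeros; a nested tuple yields a
    matching tuple of zero arrays, one per leaf size."""
    if isinstance(state_size, int):
        return np.zeros((batch_size, state_size), dtype=dtype)
    return tuple(zero_state(batch_size, s, dtype) for s in state_size)

print(zero_state(2, 4).shape)     # (2, 4) -- the GRUCell(4) case
nested = zero_state(2, (3, (5, 6)))
print(nested[1][0].shape)         # (2, 5)
```

For a GRUCell, `state_size` is simply `num_units`, so the int branch applies; the nested branch matters for cells like `MultiRNNCell` whose state is a tuple.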
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.