DEPRECATED: Please use `tf.compat.v1.nn.rnn_cell.BasicLSTMCell`.

```
tf.compat.v1.nn.rnn_cell.BasicLSTMCell(
    num_units, forget_bias=1.0, state_is_tuple=True, activation=None,
    reuse=None, name=None, dtype=None, **kwargs
)
```
Basic LSTM recurrent network cell.
The implementation is based on
We add forget_bias (default: 1.0) to the biases of the forget gate in order to reduce the scale of forgetting at the beginning of training.

This cell does not allow cell clipping or a projection layer, and does not use peephole connections: it is the basic baseline. For advanced models, please use the full tf.compat.v1.nn.rnn_cell.LSTMCell.
Note that this cell is not optimized for performance. Please use
tf.contrib.cudnn_rnn.CudnnLSTM for better performance on GPU, or
tf.contrib.rnn.LSTMBlockFusedCell for better performance on CPU.
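To make the description above concrete, here is a minimal NumPy sketch of one step of such a basic cell (no clipping, projection, or peepholes), showing where forget_bias enters. This is an illustration of the math, not TensorFlow's implementation; the (i, j, f, o) gate ordering and the [input + hidden, 4 * num_units] kernel layout are assumptions modeled on the TF 1.x cell.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def basic_lstm_step(x, state, kernel, bias, forget_bias=1.0):
    """One step of a basic LSTM cell.

    state is a (c, h) 2-tuple, matching state_is_tuple=True.
    kernel has shape [input_size + num_units, 4 * num_units]; the gate
    order (i = input, j = new input, f = forget, o = output) is assumed.
    """
    c, h = state
    gates = np.concatenate([x, h], axis=-1) @ kernel + bias
    i, j, f, o = np.split(gates, 4, axis=-1)
    # forget_bias is added to the forget gate's pre-activation, so with a
    # zero-initialized bias the cell starts out mostly remembering
    # (sigmoid(1.0) is about 0.73 rather than 0.5).
    new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * np.tanh(j)
    new_h = np.tanh(new_c) * sigmoid(o)
    return new_h, (new_c, new_h)
```

With zero-initialized weights, one step scales the previous cell state by sigmoid(forget_bias), which is the "reduce the scale of forgetting" effect described above.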
| Args | Description |
|---|---|
| `num_units` | int, the number of units in the LSTM cell. |
| `forget_bias` | float, the bias added to forget gates (see above). Must be set to 0.0 manually when restoring from CudnnLSTM-trained checkpoints. |
| `state_is_tuple` | If True, accepted and returned states are 2-tuples of the c_state and m_state. If False, they are concatenated along the column axis; the latter behavior is deprecated. |
| `activation` | Activation function of the inner states. Default: tanh. |
| `reuse` | (optional) Python boolean describing whether to reuse variables in an existing scope. If not True, and the existing scope already has the given variables, an error is raised. |
| `name` | String, the name of the layer. Layers with the same name will share weights, but to avoid mistakes we require reuse=True in such cases. |
| `dtype` | Default dtype of the layer (default of None means use the type of the first input). Required when build is called before call. |
| `**kwargs` | Dict, keyword named properties for common layer attributes, like trainable, when constructing the cell from the configs of get_config(). |
| Attributes | Description |
|---|---|
| `graph` | DEPRECATED FUNCTION |
| `output_size` | Integer or TensorShape: size of outputs produced by this cell. |
| `state_size` | Size(s) of state(s) used by this cell. It can be represented by an Integer, a TensorShape, or a tuple of Integers or TensorShapes. |
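Concretely, for the default state_is_tuple=True the state is a (c, h) pair, so the size attributes relate to num_units as in this small sketch (128 is an arbitrary example value, and the plain tuple stands in for the cell's state-size structure):

```python
# Illustrative only: how output_size and state_size relate to num_units
# for a basic LSTM cell with state_is_tuple=True.
num_units = 128
state_size = (num_units, num_units)  # sizes of the (c, h) state pair
output_size = num_units              # the output is the hidden state h
```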
get_initial_state(inputs=None, batch_size=None, dtype=None)

zero_state(batch_size, dtype)

Return zero-filled state tensor(s).

| Args | Description |
|---|---|
| `batch_size` | int, float, or unit Tensor representing the batch size. |
| `dtype` | the data type to use for the state. |

| Returns |
|---|
| If state_size is an int or TensorShape, then the return value is an N-D tensor of shape [batch_size, state_size] filled with zeros. If state_size is a nested list or tuple, then the return value is a nested list or tuple (of the same structure) of 2-D tensors with shapes [batch_size, s] for each s in state_size. |
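The zero_state behavior can be sketched in plain NumPy; this hypothetical helper is not the TF API, just an illustration of the int-vs-tuple return structure described above:

```python
import numpy as np

def zero_state_like(state_size, batch_size, dtype=np.float32):
    """Sketch of zero_state semantics: build zero-filled state tensor(s).

    For an int state_size, returns a [batch_size, state_size] array of
    zeros; for a tuple (as with state_is_tuple=True), returns a tuple of
    [batch_size, s] zero arrays, one per component size s.
    """
    if isinstance(state_size, int):
        return np.zeros((batch_size, state_size), dtype=dtype)
    return tuple(zero_state_like(s, batch_size, dtype) for s in state_size)
```

For a cell with num_units = 4 and a tuple state, zero_state_like((4, 4), batch_size=2) yields a 2-tuple of [2, 4] zero arrays, mirroring the (c, h) state pair.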
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.