Creates a recurrent neural network specified by RNNCell
```python
tf.compat.v1.nn.static_rnn(
    cell, inputs, initial_state=None, dtype=None, sequence_length=None,
    scope=None
)
```
The simplest form of RNN network generated is:
```python
state = cell.zero_state(...)
outputs = []
for input_ in inputs:
    output, state = cell(input_, state)
    outputs.append(output)
return (outputs, state)
```
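The unrolling loop above can be sketched as plain Python. The `toy_cell` below (a running sum) is a stand-in for a real `RNNCell`, used only to make the loop concrete; it is not part of the TensorFlow API.

```python
from typing import Callable, List, Tuple

def toy_cell(input_: float, state: float) -> Tuple[float, float]:
    # Stand-in for an RNNCell: new_state = state + input, output = new_state.
    new_state = state + input_
    return new_state, new_state  # (output, state)

def static_unroll(
    cell: Callable[[float, float], Tuple[float, float]],
    inputs: List[float],
    initial_state: float,
) -> Tuple[List[float], float]:
    """Mirror static_rnn's core loop: apply `cell` once per time step."""
    state = initial_state
    outputs = []
    for input_ in inputs:
        output, state = cell(input_, state)
        outputs.append(output)
    return outputs, state

outputs, final_state = static_unroll(toy_cell, [1.0, 2.0, 3.0], 0.0)
# outputs == [1.0, 3.0, 6.0]; final_state == 6.0
```

Because the loop is unrolled at graph-construction time, the sequence length `T` is fixed by the length of the `inputs` list.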
However, a few other options are available:
- An initial state can be provided.
- If the `sequence_length` vector is provided, dynamic calculation is performed. This method of calculation does not compute the RNN steps past the maximum sequence length of the minibatch (thus saving computational time), and it properly propagates the state at an example's sequence length to the final state output.
The dynamic calculation performed is, at time `t` for batch row `b`:

```python
(output, state)(b, t) =
  (t >= sequence_length(b))
    ? (zeros(cell.output_size), states(b, sequence_length(b) - 1))
    : cell(input(b, t), state(b, t - 1))
```
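The per-row rule above can be illustrated for a single batch row with the same toy running-sum cell (a stand-in, not the TensorFlow API): once `t` reaches the row's `sequence_length`, the output is zeroed and the state is frozen at its value from the last valid step.

```python
from typing import Callable, List, Tuple

def toy_cell(input_: float, state: float) -> Tuple[float, float]:
    # Stand-in for an RNNCell: new_state = state + input, output = new_state.
    new_state = state + input_
    return new_state, new_state  # (output, state)

def dynamic_unroll(
    cell: Callable[[float, float], Tuple[float, float]],
    row_inputs: List[float],
    sequence_length: int,
    initial_state: float = 0.0,
) -> Tuple[List[float], float]:
    """Unroll one batch row b, applying the dynamic-calculation rule."""
    state = initial_state
    outputs = []
    for t, input_ in enumerate(row_inputs):
        if t >= sequence_length:
            # Past this row's length: emit zeros, keep the frozen state.
            outputs.append(0.0)
        else:
            output, state = cell(input_, state)
            outputs.append(output)
    return outputs, state

outputs, final_state = dynamic_unroll(toy_cell, [1.0, 2.0, 3.0, 4.0], sequence_length=2)
# outputs == [1.0, 3.0, 0.0, 0.0]; final_state == 3.0
# i.e. the state from step sequence_length - 1 is propagated to the end.
```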
Args:

| Argument | Description |
|---|---|
| `cell` | An instance of `RNNCell`. |
| `inputs` | A length `T` list of inputs, each a `Tensor` of shape `[batch_size, input_size]`, or a nested tuple of such elements. |
| `initial_state` | (Optional) An initial state for the RNN. If `cell.state_size` is an integer, this must be a `Tensor` of appropriate type and shape `[batch_size, cell.state_size]`. If `cell.state_size` is a tuple, this should be a tuple of tensors having shapes `[batch_size, s] for s in cell.state_size`. |
| `dtype` | (Optional) The data type for the initial state and expected output. Required if `initial_state` is not provided or the RNN state has a heterogeneous dtype. |
| `sequence_length` | Specifies the length of each sequence in `inputs`. An `int32` or `int64` vector (tensor), size `[batch_size]`, with values in `[0, T)`. |
| `scope` | `VariableScope` for the created subgraph; defaults to `"rnn"`. |
Returns:

A pair `(outputs, state)`, where `outputs` is a length `T` list of outputs (one tensor per time step) and `state` is the final state.

Raises:

| Error | Condition |
|---|---|
| `TypeError` | If `cell` is not an instance of `RNNCell`. |
| `ValueError` | If `inputs` is `None` or an empty list, or if the input depth (column size) cannot be inferred from `inputs` via shape inference. |
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.