AttentionWrapperState
Defined in tensorflow/contrib/seq2seq/python/ops/attention_wrapper.py.

A namedtuple storing the state of an AttentionWrapper.
Contains:

cell_state: The state of the wrapped RNNCell at the previous time step.
attention: The attention emitted at the previous time step.
time: int32 scalar containing the current time step.
alignments: A single or tuple of Tensor(s) containing the alignments emitted at the previous time step for each attention mechanism.
alignment_history: (if enabled) a single or tuple of TensorArray(s) containing alignment matrices from all time steps for each attention mechanism. Call stack() on each to convert to a Tensor.
attention_state: A single or tuple of nested objects containing attention mechanism state for each attention mechanism. The objects may contain Tensors or TensorArrays.

See the sketch after the property list below for how these fields appear in practice.

Properties

alignment_history
Alias for field number 4
alignments
Alias for field number 3
attention
Alias for field number 1
attention_state
Alias for field number 5
cell_state
Alias for field number 0
time
Alias for field number 2
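The following is a minimal sketch (TF 1.x, with illustrative shapes and a LuongAttention mechanism as assumptions) showing where an AttentionWrapperState comes from and how its fields can be inspected:

import tensorflow as tf

batch_size, max_time, num_units = 32, 50, 128
# Encoder outputs used as attention memory; shapes are illustrative.
memory = tf.placeholder(tf.float32, [batch_size, max_time, num_units])

mechanism = tf.contrib.seq2seq.LuongAttention(num_units=num_units, memory=memory)
cell = tf.contrib.seq2seq.AttentionWrapper(
    tf.nn.rnn_cell.LSTMCell(num_units),
    mechanism,
    alignment_history=True)  # enables the alignment_history TensorArray

state = cell.zero_state(batch_size, tf.float32)  # an AttentionWrapperState
# state.cell_state  -> LSTMStateTuple of the wrapped cell
# state.attention   -> [batch_size, num_units] attention vector
# state.time        -> int32 scalar time step
# state.alignments  -> [batch_size, max_time] alignments
# After decoding, convert the history TensorArray to a Tensor:
alignments_over_time = state.alignment_history.stack()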
__new__

__new__(_cls, cell_state, attention, time, alignments, alignment_history, attention_state)

Create a new instance of AttentionWrapperState(cell_state, attention, time, alignments, alignment_history, attention_state).
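Since AttentionWrapperState is a namedtuple, invoking the class calls __new__ with the six fields in order. A hedged sketch with placeholder field values (in practice the wrapper's zero_state builds these; the empty tuple standing in for a disabled alignment_history is an assumption):

import tensorflow as tf

batch_size, num_units, max_time = 32, 128, 50

state = tf.contrib.seq2seq.AttentionWrapperState(
    cell_state=tf.nn.rnn_cell.LSTMCell(num_units).zero_state(batch_size, tf.float32),
    attention=tf.zeros([batch_size, num_units]),
    time=tf.constant(0, dtype=tf.int32),
    alignments=tf.zeros([batch_size, max_time]),
    alignment_history=(),  # assumed placeholder when history is disabled
    attention_state=tf.zeros([batch_size, max_time]))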
clone

clone(**kwargs)

Clone this object, overriding components provided by kwargs.

The new state fields' shapes must match the original state fields' shapes. This will be validated, and the original fields' shapes will be propagated to the new fields.

Example:

initial_state = attention_wrapper.zero_state(dtype=..., batch_size=...)
initial_state = initial_state.clone(cell_state=encoder_state)

Args:
**kwargs: Any properties of the state object to replace in the returned AttentionWrapperState.

Returns:
A new AttentionWrapperState whose properties are the same as this one, except for any overridden properties provided in kwargs.
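A slightly fuller sketch of the pattern above, initializing a decoder from an encoder's final state (the shapes and the zero-valued encoder_state stand-in are illustrative assumptions):

import tensorflow as tf

batch_size, max_time, num_units = 32, 50, 128
memory = tf.placeholder(tf.float32, [batch_size, max_time, num_units])
# Stand-in for the encoder's real final state.
encoder_state = tf.nn.rnn_cell.LSTMStateTuple(
    c=tf.zeros([batch_size, num_units]),
    h=tf.zeros([batch_size, num_units]))

attention = tf.contrib.seq2seq.BahdanauAttention(num_units, memory)
decoder_cell = tf.contrib.seq2seq.AttentionWrapper(
    tf.nn.rnn_cell.LSTMCell(num_units), attention)

# zero_state builds correctly shaped fields; clone swaps in the encoder
# state while validating that the shapes still match.
initial_state = decoder_cell.zero_state(batch_size, tf.float32)
initial_state = initial_state.clone(cell_state=encoder_state)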