Batches all input tensors nondeterministically.
tf.raw_ops.Batch(
    in_tensors, num_batch_threads, max_batch_size, batch_timeout_micros,
    grad_timeout_micros, max_enqueued_batches=10, allowed_batch_sizes=[],
    container='', shared_name='', batching_queue='', name=None
)
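For concreteness, a minimal sketch of a single eager invocation (the tensor values and the timeout are illustrative choices, not part of the API):

```python
import tensorflow as tf

# A single invocation enqueues its elements and blocks until the batch is
# emitted; with no concurrent peers, the op waits up to batch_timeout_micros
# and then emits whatever has accumulated.
x = tf.constant([[1.0], [2.0]])

batched_tensors, batch_index, op_id = tf.raw_ops.Batch(
    in_tensors=[x],
    num_batch_threads=1,
    max_batch_size=4,
    batch_timeout_micros=100000,  # wait at most 0.1 s for more elements
    grad_timeout_micros=100000,
)
# batched_tensors is a list with one Tensor; here its first dimension is 2.
```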
When many instances of this Op are run concurrently with the same container/shared_name on the same device, some will output zero-shaped Tensors and others will output Tensors of size up to max_batch_size.
All Tensors in in_tensors are batched together (so, for example, labels and features should be batched with a single instance of this operation).
Each invocation of batch emits an id scalar which will be used to identify this particular invocation when doing unbatch or its gradient.
Each op which emits a non-empty batch will also emit a non-empty batch_index Tensor, which is a [K, 3] matrix where each row contains the invocation's id, start, and length of elements of each set of Tensors present in batched_tensors.
Batched tensors are concatenated along the first dimension, and all tensors in in_tensors must have the same size in their first dimension.
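The id and batch_index outputs exist to make the operation invertible via tf.raw_ops.Unbatch. A sketch of the round trip, assuming a single invocation (so this caller receives the whole batch; values and timeouts are illustrative):

```python
import tensorflow as tf

x = tf.constant([3.0, 1.0, 4.0])

batched, index, op_id = tf.raw_ops.Batch(
    in_tensors=[x],
    num_batch_threads=1,
    max_batch_size=8,
    batch_timeout_micros=0,  # emit immediately rather than waiting for peers
    grad_timeout_micros=0,
)

# index is the [K, 3] matrix described above; op_id identifies this invocation.
recovered = tf.raw_ops.Unbatch(
    batched_tensor=batched[0],
    batch_index=index,
    id=op_id,
    timeout_micros=1000000,
)
# recovered should hold this invocation's original elements, i.e. equal x.
```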
in_tensors: The tensors to be batched.
num_batch_threads: Number of scheduling threads for processing batches of work. Determines the number of batches processed in parallel.
max_batch_size: Batch sizes will never be bigger than this.
batch_timeout_micros: Maximum number of microseconds to wait before outputting an incomplete batch.
allowed_batch_sizes: Optional list of allowed batch sizes. If left empty, does nothing. Otherwise, supplies a list of batch sizes, causing the op to pad batches up to one of those sizes (see the sketch after this list). The entries must increase monotonically, and the final entry must equal max_batch_size.
grad_timeout_micros: The timeout to use for the gradient. See Unbatch.
batched_tensors: Either empty tensors or a batch of concatenated Tensors.
batch_index: If batched_tensors is non-empty, has information to invert it.
container: Controls the scope of sharing of this batch.
id: Always contains a scalar with a unique ID for this invocation of Batch.
shared_name: Concurrently running instances of batch on the same device with the same container and shared_name will batch their elements together. If left empty, the op name will be used as the shared name.
T: The types of tensors to be batched.
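To illustrate the allowed_batch_sizes padding described above, a sketch in which three enqueued elements are padded up to the next allowed size (shapes and values are illustrative assumptions):

```python
import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])  # three elements in the batch dimension

batched, index, op_id = tf.raw_ops.Batch(
    in_tensors=[x],
    num_batch_threads=1,
    max_batch_size=4,
    batch_timeout_micros=0,
    grad_timeout_micros=0,
    allowed_batch_sizes=[2, 4],  # monotonically increasing; last entry equals max_batch_size
)

# The emitted batch is padded from size 3 up to the next allowed size, 4;
# batch_index still records that only 3 elements belong to this invocation.
```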
| Args | |
|---|---|
| in_tensors | A list of Tensor objects. |
| num_batch_threads | An int. |
| max_batch_size | An int. |
| batch_timeout_micros | An int. |
| grad_timeout_micros | An int. |
| max_enqueued_batches | An optional int. Defaults to 10. |
| allowed_batch_sizes | An optional list of ints. Defaults to []. |
| container | An optional string. Defaults to "". |
| shared_name | An optional string. Defaults to "". |
| batching_queue | An optional string. Defaults to "". |
| name | A name for the operation (optional). |
| Returns | |
|---|---|
| A tuple of Tensor objects (batched_tensors, batch_index, id). | |
| batched_tensors | A list of Tensor objects. Has the same type as in_tensors. |
| batch_index | A Tensor of type int64. |
| id | A Tensor of type int64. |