A one-machine strategy that puts all variables on a single device.
Inherits From: Strategy
```python
tf.distribute.experimental.CentralStorageStrategy(
    compute_devices=None, parameter_device=None
)
```
Variables are assigned to local CPU or the only GPU. If there is more than one GPU, compute operations (other than variable update operations) will be replicated across all GPUs.
```python
strategy = tf.distribute.experimental.CentralStorageStrategy()
# Create a dataset
ds = tf.data.Dataset.range(5).batch(2)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(ds)

with strategy.scope():
  @tf.function
  def train_step(val):
    return val + 1

  # Iterate over the distributed dataset
  for x in dist_dataset:
    # process dataset elements
    strategy.run(train_step, args=(x,))
```
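Variable placement can be inspected directly. The following is a minimal sketch, not part of the original example; the exact device string it prints depends on the hardware available (local CPU, or the only GPU if there is exactly one):

```python
import tensorflow as tf

strategy = tf.distribute.experimental.CentralStorageStrategy()

with strategy.scope():
  # Variables created under the scope live on the parameter device.
  v = tf.Variable(1.0)

# Inspect where the variable was placed; the exact string depends on
# the machine this runs on.
print(v.device)
```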
Attributes | |
---|---|
cluster_resolver | Returns the cluster resolver associated with this strategy. In general, a multi-worker strategy has a cluster resolver associated with it; strategies that intend to have one must set it. Single-worker strategies usually do not have a cluster resolver, in which case this property returns None. |
extended | tf.distribute.StrategyExtended with additional methods. |
num_replicas_in_sync | Returns number of replicas over which gradients are aggregated. |

The cluster resolver can be used to branch on the current task, for example:

```python
os.environ['TF_CONFIG'] = json.dumps({
    'cluster': {
        'worker': ["localhost:12345", "localhost:23456"],
        'ps': ["localhost:34567"]
    },
    'task': {'type': 'worker', 'index': 0}
})

# This implicitly uses TF_CONFIG for the cluster and current task info.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
...
if strategy.cluster_resolver.task_type == 'worker':
  # Perform something that's only applicable on workers. Since we set this
  # as a worker above, this block will run on this particular instance.
  ...
elif strategy.cluster_resolver.task_type == 'ps':
  # Perform something that's only applicable on parameter servers. Since we
  # set this as a worker above, this block will not run on this particular
  # instance.
  ...
```
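One common use of num_replicas_in_sync is scaling a per-replica batch size up to a global batch size. The sketch below is illustrative only; per_replica_batch_size is a hypothetical value chosen for the example:

```python
import tensorflow as tf

strategy = tf.distribute.experimental.CentralStorageStrategy()

# Hypothetical per-replica batch size chosen for this sketch.
per_replica_batch_size = 64
# Scale up to the global batch size across all compute replicas.
global_batch_size = per_replica_batch_size * strategy.num_replicas_in_sync

dataset = tf.data.Dataset.range(1024).batch(global_batch_size)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
```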
experimental_assign_to_logical_device
```python
experimental_assign_to_logical_device(
    tensor, logical_device_id
)
```
Adds annotation that tensor will be assigned to a logical device.

Note: This API is only supported in TPUStrategy for now. This adds an annotation to tensor specifying that operations on tensor will be invoked on logical core device id logical_device_id. When model parallelism is used, the default behavior is that all ops are placed on the zero-th logical device.
```python
# Initializing TPU system with 2 logical devices and 4 replicas.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
    topology,
    computation_shape=[1, 1, 1, 2],
    num_replicas=4)
strategy = tf.distribute.TPUStrategy(
    resolver, experimental_device_assignment=device_assignment)
iterator = iter(inputs)

@tf.function()
def step_fn(inputs):
  output = tf.add(inputs, inputs)

  # Add operation will be executed on logical device 0.
  output = strategy.experimental_assign_to_logical_device(output, 0)
  return output

strategy.run(step_fn, args=(next(iterator),))
```
Args | |
---|---|
tensor | Input tensor to annotate. |
logical_device_id | Id of the logical core to which the tensor will be assigned. |
Raises | |
---|---|
ValueError | The logical device id presented is not consistent with total number of partitions specified by the device assignment. |
Returns | |
---|---|
Annotated tensor with identical value as tensor . |
experimental_distribute_dataset
```python
experimental_distribute_dataset(
    dataset
)
```
Distributes a tf.data.Dataset instance provided via dataset.
The returned dataset is a wrapped strategy dataset which creates a multidevice iterator under the hood. It prefetches the input data to the specified devices on the worker. The returned distributed dataset can be iterated over similar to how regular datasets can.
Note: Currently, the user cannot add any more transformations to a distributed dataset.
```python
strategy = tf.distribute.experimental.CentralStorageStrategy()  # with 1 CPU and 1 GPU
dataset = tf.data.Dataset.range(10).batch(2)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

for x in dist_dataset:
  print(x)  # Prints PerReplica values [0, 1], [2, 3], ...
```
Args | |
---|---|
dataset | tf.data.Dataset to be prefetched to device. |
Returns | |
---|---|
A "distributed Dataset " that the caller can iterate over. |
experimental_distribute_datasets_from_function
```python
experimental_distribute_datasets_from_function(
    dataset_fn
)
```
Distributes tf.data.Dataset instances created by calls to dataset_fn.

dataset_fn will be called once for each worker in the strategy. In this case, we only have one worker, so dataset_fn is called once. Each replica on this worker will then dequeue a batch of elements from this local dataset.

The dataset_fn should take a tf.distribute.InputContext instance, where information about batching and input replication can be accessed.
```python
def dataset_fn(input_context):
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  d = tf.data.Dataset.from_tensors([[1.]]).repeat().batch(batch_size)
  return d.shard(
      input_context.num_input_pipelines, input_context.input_pipeline_id)

inputs = strategy.experimental_distribute_datasets_from_function(dataset_fn)

for batch in inputs:
  replica_results = strategy.run(replica_fn, args=(batch,))
```
Args | |
---|---|
dataset_fn | A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset . |
Returns | |
---|---|
A "distributed Dataset ", which the caller can iterate over like regular datasets. |
experimental_distribute_values_from_function
```python
experimental_distribute_values_from_function(
    value_fn
)
```
Generates tf.distribute.DistributedValues from value_fn.

This function is to generate tf.distribute.DistributedValues to pass into run, reduce, or other methods that take distributed values when not using datasets.
Args | |
---|---|
value_fn | The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. |
Returns | |
---|---|
A tf.distribute.DistributedValues containing a value for each replica. |
```python
# Return a constant value per replica.
strategy = tf.distribute.MirroredStrategy()
def value_fn(ctx):
  return tf.constant(1.)
distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
# (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,)
```

```python
# Distribute values from an array based on the replica id.
strategy = tf.distribute.MirroredStrategy()
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
  return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
# (3.0,)
```

```python
# Specify values using the number of replicas.
strategy = tf.distribute.MirroredStrategy()
def value_fn(ctx):
  return ctx.num_replicas_in_sync
distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
# (1,)
```

```python
# Place values on devices and distribute.
strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
  with tf.device(worker_devices[i]):
    multiple_values.append(tf.constant(1.0))

def value_fn(ctx):
  return multiple_values[ctx.replica_id_in_sync_group]

distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)
```
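The distributed values produced this way can then be passed to run. The following is a minimal sketch, not from the original docstring; replica_fn is a hypothetical per-replica function, and a default single-replica MirroredStrategy is assumed so the snippet runs on CPU:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

def value_fn(ctx):
  # One distinct value per replica, derived from the replica id.
  return tf.constant(float(ctx.replica_id_in_sync_group))

distributed_values = strategy.experimental_distribute_values_from_function(
    value_fn)

@tf.function
def replica_fn(x):
  # Hypothetical per-replica computation used only for illustration.
  return x * 2.0

result = strategy.run(replica_fn, args=(distributed_values,))
print(strategy.experimental_local_results(result))
```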
experimental_local_results
```python
experimental_local_results(
    value
)
```
Returns the list of all local per-replica values contained in value.

In CentralStorageStrategy there is a single worker, so the value returned will be all the values on that worker.
Args | |
---|---|
value | A value returned by run() , extended.call_for_each_replica() , or a variable created in scope . |
Returns | |
---|---|
A tuple of values contained in value . If value represents a single value, this returns (value,). |
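A minimal sketch of unpacking per-replica results follows; it is not from the original docstring, step_fn is a hypothetical function, and with a single compute device the returned tuple has one element:

```python
import tensorflow as tf

strategy = tf.distribute.experimental.CentralStorageStrategy()

@tf.function
def step_fn(x):
  # Hypothetical per-replica computation for illustration only.
  return x + 1.0

per_replica_result = strategy.run(step_fn, args=(tf.constant(1.0),))

# Unpack the per-replica values into a plain tuple of tensors.
local_values = strategy.experimental_local_results(per_replica_result)
print(local_values)
```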
experimental_make_numpy_dataset
```python
experimental_make_numpy_dataset(
    numpy_input
)
```
Makes a tf.data.Dataset from a numpy array. (deprecated)

This avoids adding numpy_input as a large constant in the graph, and copies the data to the machine or machines that will be processing the input.

Note that you will likely need to use experimental_distribute_dataset with the returned dataset to further distribute it with the strategy.
```python
strategy = tf.distribute.MirroredStrategy()
numpy_input = np.ones([10], dtype=np.float32)
dataset = strategy.experimental_make_numpy_dataset(numpy_input)
dataset
# <TensorSliceDataset shapes: (), types: tf.float32>
dataset = dataset.batch(2)
dist_dataset = strategy.experimental_distribute_dataset(dataset)
```
Args | |
---|---|
numpy_input | a nest of NumPy input arrays that will be converted into a dataset. Note that the NumPy arrays are stacked, as that is normal tf.data.Dataset behavior. |
Returns | |
---|---|
A tf.data.Dataset representing numpy_input . |
experimental_replicate_to_logical_devices
```python
experimental_replicate_to_logical_devices(
    tensor
)
```
Adds annotation that tensor will be replicated to all logical devices.

Note: This API is only supported in TPUStrategy for now. This adds an annotation to tensor specifying that operations on tensor will be invoked on all logical devices.
```python
# Initializing TPU system with 2 logical devices and 4 replicas.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
    topology,
    computation_shape=[1, 1, 1, 2],
    num_replicas=4)
strategy = tf.distribute.TPUStrategy(
    resolver, experimental_device_assignment=device_assignment)
iterator = iter(inputs)

@tf.function()
def step_fn(inputs):
  images, labels = inputs
  images = strategy.experimental_split_to_logical_devices(
      images, [1, 2, 4, 1])

  # model() function will be executed on the logical devices with `images`
  # split across them.
  output = model(images)

  # For loss calculation, all logical devices share the same logits
  # and labels.
  labels = strategy.experimental_replicate_to_logical_devices(labels)
  output = strategy.experimental_replicate_to_logical_devices(output)
  loss = loss_fn(labels, output)

  return loss

strategy.run(step_fn, args=(next(iterator),))
```
Args | |
---|---|
tensor | Input tensor to annotate. |
Returns | |
---|---|
Annotated tensor with identical value as tensor . |
experimental_split_to_logical_devices
```python
experimental_split_to_logical_devices(
    tensor, partition_dimensions
)
```
Adds annotation that tensor will be split across logical devices.

Note: This API is only supported in TPUStrategy for now. This adds an annotation to tensor specifying that operations on tensor will be split among multiple logical devices. tensor will be split across the dimensions specified by partition_dimensions, and each dimension of tensor must be divisible by the corresponding value in partition_dimensions.

For example, for a system with 8 logical devices, if tensor is an image tensor with shape (batch_size, width, height, channel) and partition_dimensions is [1, 2, 4, 1], then tensor will be split 2 ways in the width dimension and 4 ways in the height dimension, and the split tensor values will be fed into 8 logical devices.
```python
# Initializing TPU system with 8 logical devices and 1 replica.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='')
tf.config.experimental_connect_to_cluster(resolver)
topology = tf.tpu.experimental.initialize_tpu_system(resolver)
device_assignment = tf.tpu.experimental.DeviceAssignment.build(
    topology,
    computation_shape=[1, 2, 2, 2],
    num_replicas=1)
strategy = tf.distribute.TPUStrategy(
    resolver, experimental_device_assignment=device_assignment)
iterator = iter(inputs)

@tf.function()
def step_fn(inputs):
  inputs = strategy.experimental_split_to_logical_devices(
      inputs, [1, 2, 4, 1])

  # model() function will be executed on 8 logical devices with `inputs`
  # split 2 * 4 ways.
  output = model(inputs)
  return output

strategy.run(step_fn, args=(next(iterator),))
```
Args | |
---|---|
tensor | Input tensor to annotate. |
partition_dimensions | An unnested list of integers with the size equal to the rank of tensor, specifying how tensor will be partitioned. The product of all elements in partition_dimensions must be equal to the total number of logical devices per replica. |
Raises | |
---|---|
ValueError | 1) If the size of partition_dimensions does not equal the rank of tensor, 2) if the product of the elements of partition_dimensions does not match the number of logical devices per replica, or 3) if a dimension of tensor is not divisible by the corresponding value in partition_dimensions. |
Returns | |
---|---|
Annotated tensor with identical value as tensor . |
reduce
```python
reduce(
    reduce_op, value, axis
)
```
Reduce value across replicas.

Given a per-replica value returned by run, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements. For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3] will be on replica 0 and [4, 5, 6, 7] will be on replica 1. By default, reduce will just aggregate across replicas, returning [0+4, 1+5, 2+6, 3+7]. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient). More often you will want to aggregate across the global batch, which you can get by specifying the batch dimension as the axis, typically axis=0. In this case it would return a scalar 0+1+2+3+4+5+6+7.

If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0. If you specify tf.distribute.ReduceOp.MEAN, using axis=0 will use the correct denominator of 6. Contrast this with computing reduce_mean to get a scalar value on each replica and then using this function to average those means, which will weigh some values 1/8 and others 1/4.
```python
strategy = tf.distribute.experimental.CentralStorageStrategy(
    compute_devices=['CPU:0', 'GPU:0'], parameter_device='CPU:0')
ds = tf.data.Dataset.range(10)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(ds)

with strategy.scope():
  @tf.function
  def train_step(val):
    # pass through
    return val

  # Iterate over the distributed dataset
  for x in dist_dataset:
    result = strategy.run(train_step, args=(x,))

result = strategy.reduce(tf.distribute.ReduceOp.SUM, result,
                         axis=None).numpy()
# result: array([ 4,  6,  8, 10])

result = strategy.reduce(tf.distribute.ReduceOp.SUM, result, axis=0).numpy()
# result: 28
```
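Complementing the example above, the following sketch (not from the original docstring) illustrates the MEAN case on a partial batch; a single-replica MirroredStrategy is assumed so it runs anywhere:

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

# A "last partial batch" of 6 elements.
batch = tf.constant([0., 1., 2., 3., 4., 5.])

per_replica = strategy.run(tf.identity, args=(batch,))

# Averages over both replicas and the batch dimension; here the
# denominator is 6, the true number of elements in the batch.
mean = strategy.reduce(tf.distribute.ReduceOp.MEAN, per_replica, axis=0)
print(mean.numpy())  # 2.5
```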
Args | |
---|---|
reduce_op | A tf.distribute.ReduceOp value specifying how values should be combined. |
value | A "per replica" value, e.g. returned by run to be combined into a single tensor. |
axis | Specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension). |
Returns | |
---|---|
A Tensor . |
run
```python
run(
    fn, args=(), kwargs=None, options=None
)
```
Run fn on each replica, with the given arguments.

In CentralStorageStrategy, fn is called on each of the compute replicas, with the provided "per replica" arguments specific to that device.
Args | |
---|---|
fn | The function to run. The output must be a tf.nest of Tensor s. |
args | (Optional) Positional arguments to fn . |
kwargs | (Optional) Keyword arguments to fn . |
options | (Optional) An instance of tf.distribute.RunOptions specifying the options to run fn . |
Returns | |
---|---|
Return value from running fn . |
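A minimal end-to-end sketch of run with a distributed dataset follows; it is illustrative only, and step_fn is a hypothetical per-replica function:

```python
import tensorflow as tf

strategy = tf.distribute.experimental.CentralStorageStrategy()

dataset = tf.data.Dataset.range(8).batch(4)
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def step_fn(batch):
  # Hypothetical per-replica computation used only for illustration.
  return tf.reduce_sum(batch)

for batch in dist_dataset:
  per_replica_sum = strategy.run(step_fn, args=(batch,))
  total = strategy.reduce(tf.distribute.ReduceOp.SUM, per_replica_sum,
                          axis=None)
  print(total.numpy())
```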
scope
```python
scope()
```
Context manager to make the strategy current and distribute variables.
This method returns a context manager, and is used as follows:
```python
strategy = tf.distribute.MirroredStrategy()
# Variable created inside scope:
with strategy.scope():
  mirrored_variable = tf.Variable(1.)
mirrored_variable
# MirroredVariable:{
#   0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
# }

# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
# <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
```
What happens when Strategy.scope is entered?

* strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy.
* Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts.
* Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope.
* In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.

Note: Entering a scope does not automatically distribute a computation, except in the case of high level training frameworks like keras model.fit. If you're not using model.fit, you need to use the strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
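A short sketch of the first point above (not from the original docstring), showing how tf.distribute.get_strategy() changes inside and outside the scope:

```python
import tensorflow as tf

strategy = tf.distribute.experimental.CentralStorageStrategy()

with strategy.scope():
  # Inside the scope, the installed strategy is returned.
  print(tf.distribute.get_strategy() is strategy)  # True

# Outside the scope, the default (no-op) strategy is returned.
print(tf.distribute.get_strategy() is strategy)  # False
```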
What should be in scope and what should be outside?

There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).

* Anything that creates variables that should be distributed variables must be created in strategy.scope. This can be done either by directly putting it in scope, or by relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common objects that create variables in TF are models, optimizers and metrics; these should always be created inside the scope. Another source of variable creation can be a checkpoint restore, when variables are created lazily. Note that any variable created inside a strategy captures the strategy information, so reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope.
* Some strategy APIs (such as strategy.run and strategy.reduce) which require being in a strategy's scope enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself.
* When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training. See a detailed example in the distributed keras tutorial. Note that simply calling model(..) is not impacted; only high level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope.
* The following can be either inside or outside the scope:
  * Creating the input datasets.
  * Defining tf.functions that represent your training step.
  * Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way.
  * Checkpoint saving. As mentioned above, checkpoint.restore may sometimes need to be inside scope if it creates variables.

A short sketch illustrating this guidance follows the Returns table below.

Returns | |
---|---|
A context manager. |
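As a sketch of the guidance above (an illustration under assumed shapes and a simple keras model, not the only valid arrangement), model, optimizer, and metric creation go inside the scope, while dataset creation and the tf.function definition may live outside it:

```python
import tensorflow as tf

strategy = tf.distribute.experimental.CentralStorageStrategy()

# Dataset creation can happen outside the scope.
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.random.normal([8, 4]), tf.random.normal([8, 1]))).batch(4)

with strategy.scope():
  # Objects that create variables belong inside the scope.
  model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
  optimizer = tf.keras.optimizers.SGD(0.01)
  loss_metric = tf.keras.metrics.Mean()

# The training step tf.function can be defined outside the scope.
@tf.function
def train_step(features, labels):
  with tf.GradientTape() as tape:
    loss = tf.reduce_mean(tf.square(model(features) - labels))
  grads = tape.gradient(loss, model.trainable_variables)
  optimizer.apply_gradients(zip(grads, model.trainable_variables))
  loss_metric.update_state(loss)

for features, labels in dataset:
  # strategy.run enters the scope automatically.
  strategy.run(train_step, args=(features, labels))
```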
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.3/api_docs/python/tf/distribute/experimental/CentralStorageStrategy