A multi-worker tf.distribute strategy with parameter servers.
Inherits From: Strategy
tf.distribute.experimental.ParameterServerStrategy( cluster_resolver, variable_partitioner=None )
Parameter server training is a common data-parallel method to scale up a machine learning model on multiple machines. A parameter server training cluster consists of workers and parameter servers. Variables are created on parameter servers and are read and updated by workers in each step. By default, workers read and update these variables independently without synchronizing with each other. This configuration is known as asynchronous training.
In TensorFlow 2, we recommend a central coordination-based architecture for parameter server training, where workers and parameter servers run a tf.distribute.Server and there is another task that creates resources on workers and parameter servers, dispatches functions, and coordinates the training. We refer to this task as the "coordinator". The coordinator uses a tf.distribute.experimental.coordinator.ClusterCoordinator to coordinate the cluster, and a tf.distribute.experimental.ParameterServerStrategy to define variables on parameter servers and computation on workers.
For the training to work, the coordinator dispatches tf.function
s to be executed on remote workers. Upon receiving requests from the coordinator, a worker executes the tf.function
by reading the variables from parameter servers, executing the ops, and updating the variables on the parameter servers. Each worker only processes requests from the coordinator and communicates with parameter servers, without direct interaction with other workers in the cluster.
As a result, failures of some workers do not prevent the cluster from continuing the work, and this allows the cluster to train with instances that can be occasionally unavailable (e.g. preemptible or spot instances). The coordinator and parameter servers, though, must be available at all times for the cluster to make progress.
Note that the coordinator is not one of the training workers. Instead, it creates resources such as variables and datasets, dispatches tf.functions, saves checkpoints, and so on. In addition to workers, parameter servers and the coordinator, an optional evaluator can be run on the side that periodically reads the checkpoints saved by the coordinator and runs evaluations against each checkpoint.
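For illustration only, a minimal sidecar-evaluator sketch; build_model, eval_dataset, eval_step and checkpoint_dir are hypothetical placeholders, not part of this API:

# Minimal sidecar-evaluator sketch (hypothetical helpers: build_model,
# eval_dataset, eval_step; checkpoint_dir is the directory the coordinator
# writes checkpoints to).
eval_model = build_model()
checkpoint = tf.train.Checkpoint(model=eval_model)
# Blocks until a new checkpoint appears, then yields its path.
for checkpoint_path in tf.train.checkpoints_iterator(checkpoint_dir):
  checkpoint.restore(checkpoint_path).expect_partial()
  for batch in eval_dataset:
    eval_step(eval_model, batch)  # user-defined evaluation logic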
tf.distribute.experimental.ParameterServerStrategy
has to work in conjunction with a tf.distribute.experimental.coordinator.ClusterCoordinator
object. Standalone usage of tf.distribute.experimental.ParameterServerStrategy
without central coordination is not supported at this time.
Example code for coordinator
Here's an example usage of the API, with a custom training loop to train a model. This code snippet is intended to be run on the single task that is designated as the coordinator. Note that the cluster_resolver, variable_partitioner, and dataset_fn arguments are explained in the following "Cluster setup", "Variable partitioning", and "Dataset preparation" sections.
# Set the environment variable to allow reporting worker and ps failure to the
# coordinator. This is a short-term workaround.
os.environ["GRPC_FAIL_FAST"] = "use_caller"

# Prepare a strategy to use with the cluster and variable partitioning info.
strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver=...,
    variable_partitioner=...)
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
    strategy=strategy)

# Prepare a distribute dataset that will place datasets on the workers.
distributed_dataset = coordinator.create_per_worker_dataset(dataset_fn=...)

with strategy.scope():
  model = ...
  optimizer, metrics = ...  # Keras optimizer/metrics are great choices
  checkpoint = tf.train.Checkpoint(model=model, optimizer=optimizer)
  checkpoint_manager = tf.train.CheckpointManager(
      checkpoint, checkpoint_dir, max_to_keep=2)
  # `load_checkpoint` infers initial epoch from `optimizer.iterations`.
  initial_epoch = load_checkpoint(checkpoint_manager) or 0

@tf.function
def worker_fn(iterator):

  def replica_fn(inputs):
    batch_data, labels = inputs
    # calculate gradients, apply gradients, update metrics, etc.

  strategy.run(replica_fn, args=(next(iterator),))

for epoch in range(initial_epoch, num_epoch):
  distributed_iterator = iter(distributed_dataset)  # Reset iterator state.
  for step in range(steps_per_epoch):

    # Asynchronously schedule the `worker_fn` to be executed on an arbitrary
    # worker. This call returns immediately.
    coordinator.schedule(worker_fn, args=(distributed_iterator,))

  # `join` blocks until all scheduled `worker_fn`s finish execution. Once it
  # returns, we can read the metrics and save checkpoints as needed.
  coordinator.join()
  logging.info('Metric result: %r', metrics.result())
  train_accuracy.reset_states()
  checkpoint_manager.save()
Example code for worker and parameter servers
In addition to the coordinator, there should be tasks designated as "worker" or "ps". They should run the following code to start a TensorFlow server and wait for the coordinator's requests:
# Set the environment variable to allow reporting worker and ps failure to the
# coordinator.
os.environ["GRPC_FAIL_FAST"] = "use_caller"

# Provide a `tf.distribute.cluster_resolver.ClusterResolver` that serves
# the cluster information. See below "Cluster setup" section.
cluster_resolver = ...

server = tf.distribute.Server(
    cluster_resolver.cluster_spec(),
    job_name=cluster_resolver.task_type,
    task_index=cluster_resolver.task_id,
    protocol="grpc")

# Blocking the process that starts a server from exiting.
server.join()
Cluster setup
In order for the tasks in the cluster to know other tasks' addresses, a tf.distribute.cluster_resolver.ClusterResolver is required to be used in the coordinator, worker, and ps tasks. The tf.distribute.cluster_resolver.ClusterResolver is responsible for providing the cluster information, as well as the task type and id of the current task. See tf.distribute.cluster_resolver.ClusterResolver for more information.
If the TF_CONFIG environment variable is set, a tf.distribute.cluster_resolver.TFConfigClusterResolver should be used as well. Note that for legacy reasons, on some platforms, "chief" is used as the task type for the coordinator, as the following example demonstrates. Here we set TF_CONFIG for the task designated as a parameter server (task type "ps") and index 1 (the second task), in a cluster with 1 chief, 2 parameter servers, and 3 workers. Note that it needs to be set before the use of tf.distribute.cluster_resolver.TFConfigClusterResolver.
Example code for cluster setup:
os.environ['TF_CONFIG'] = '''
{
  "cluster": {
    "chief": ["chief.example.com:2222"],
    "ps": ["ps0.example.com:2222", "ps1.example.com:2222"],
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222",
               "worker2.example.com:2222"]
  },
  "task": {
    "type": "ps",
    "index": 1
  }
}
'''
If you prefer to run the same binary for all tasks, you will need to let the binary branch into different roles at the beginning of the program:
os.environ["GRPC_FAIL_FAST"] = "use_caller"
cluster_resolver = tf.distribute.cluster_resolver.TFConfigClusterResolver()

# If coordinator, create a strategy and start the training program.
if cluster_resolver.task_type == 'chief':
  strategy = tf.distribute.experimental.ParameterServerStrategy(
      cluster_resolver)
  ...
# If worker/ps, create a server
elif cluster_resolver.task_type in ("worker", "ps"):
  server = tf.distribute.Server(...)
  ...
Alternatively, you can also start a bunch of TensorFlow servers in advance and connect to them later. The coordinator can be in the same cluster or on any machine that has connectivity to the workers and parameter servers. This is covered in our guide and tutorial.
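For illustration, a hedged sketch of what the coordinator might do in that case, building the strategy from an in-memory cluster spec with tf.distribute.cluster_resolver.SimpleClusterResolver (the addresses below are placeholders, and the servers at those addresses are assumed to be already running):

cluster_dict = {
    "worker": ["worker0.example.com:2222", "worker1.example.com:2222"],
    "ps": ["ps0.example.com:2222"],
}
cluster_spec = tf.train.ClusterSpec(cluster_dict)
# Wrap the static cluster information in a resolver the strategy can use.
cluster_resolver = tf.distribute.cluster_resolver.SimpleClusterResolver(
    cluster_spec, rpc_layer="grpc")
strategy = tf.distribute.experimental.ParameterServerStrategy(cluster_resolver)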
Variable creation with strategy.scope()
tf.distribute.experimental.ParameterServerStrategy
follows the tf.distribute
API contract where variable creation is expected to be inside the context manager returned by strategy.scope()
, in order to be correctly placed on parameter servers in a round-robin manner:
# In this example, we're assuming having 3 ps.
strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver=...)
coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
    strategy=strategy)

# Variables should be created inside scope to be placed on parameter servers.
# If created outside scope such as `v1` here, it would be placed on the
# coordinator.
v1 = tf.Variable(initial_value=0.0)

with strategy.scope():
  v2 = tf.Variable(initial_value=1.0)
  v3 = tf.Variable(initial_value=2.0)
  v4 = tf.Variable(initial_value=3.0)
  v5 = tf.Variable(initial_value=4.0)

# v2 through v5 are created in scope and are distributed on parameter servers.
# Default placement is round-robin but the order should not be relied on.
assert v2.device == "/job:ps/replica:0/task:0/device:CPU:0"
assert v3.device == "/job:ps/replica:0/task:1/device:CPU:0"
assert v4.device == "/job:ps/replica:0/task:2/device:CPU:0"
assert v5.device == "/job:ps/replica:0/task:0/device:CPU:0"
See tf.distribute.Strategy.scope for more information.
Variable partitioning
Having dedicated servers to store variables means being able to divide up, or "shard", the variables across the parameter servers. Partitioning a large variable among parameter servers is a commonly used technique to boost training throughput and mitigate memory constraints. It enables parallel computation and updates on different shards of a variable, and often yields better load balancing across parameter servers. Without sharding, models with large variables (e.g., embeddings) that can't fit into one machine's memory would otherwise be unable to train.
With tf.distribute.experimental.ParameterServerStrategy
, if a variable_partitioner
is provided to __init__
and certain conditions are satisfied, the resulting variables created in scope are sharded across the parameter servers in a round-robin fashion. The variable reference returned from tf.Variable becomes a type that serves as the container of the sharded variables. One can access the variables attribute of this container for the actual variable components. If building a model with tf.Module or Keras, the variable components are collected in the variables-like attributes.
class Dense(tf.Module):
  def __init__(self, name=None):
    super().__init__(name=name)
    self.w = tf.Variable(tf.random.normal([100, 10]), name='w')

  def __call__(self, x):
    return x * self.w

# Partition the dense layer into 2 shards.
variable_partitioner = (
    tf.distribute.experimental.partitioners.FixedShardsPartitioner(
        num_shards=2))
strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver=...,
    variable_partitioner=variable_partitioner)
with strategy.scope():
  dense = Dense()
assert len(dense.variables) == 2
assert isinstance(dense.variables[0], tf.Variable)
assert isinstance(dense.variables[1], tf.Variable)
assert dense.variables[0].name == "w/part_0"
assert dense.variables[1].name == "w/part_1"
The sharded variable container can be converted to a Tensor via tf.convert_to_tensor. This means the container can be directly used in most Python ops where such tensor conversion happens automatically. For example, in the above code snippet, x * self.w would implicitly apply this tensor conversion. Note that such conversion can be expensive, as the variable components need to be transferred from multiple parameter servers to where the value is used.
tf.nn.embedding_lookup, on the other hand, doesn't apply the tensor conversion and instead performs parallel lookups on the variable components. This is crucial for scaling up embedding lookups when the embedding table variable is large.
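For illustration, a hedged sketch contrasting the two behaviors (assuming a cluster with at least two parameter servers; the cluster_resolver is elided as in the examples above):

variable_partitioner = (
    tf.distribute.experimental.partitioners.FixedShardsPartitioner(
        num_shards=2))
strategy = tf.distribute.experimental.ParameterServerStrategy(
    cluster_resolver=..., variable_partitioner=variable_partitioner)

with strategy.scope():
  # A sharded embedding table, split across parameter servers.
  emb = tf.Variable(tf.random.normal([1000, 16]), name="emb")

@tf.function
def lookup(ids):
  # Parallel lookups on the shard components; no full-table transfer.
  return tf.nn.embedding_lookup(emb, ids)

@tf.function
def densify():
  # Gathers all shards to the caller; can be expensive for large tables.
  return tf.convert_to_tensor(emb)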
When a partitioned variable is saved to a SavedModel, it will be saved as if it were one single variable. This improves serving efficiency by eliminating a number of ops that handle the partition aspects.
Known limitations of variable partitioning:
The number of partitions must not change across Checkpoint save/load.
After saving partitioned variables to a SavedModel, the SavedModel can't be loaded via tf.saved_model.load
.
Partitioned variables don't directly work with tf.GradientTape; please use the variables attribute to get the actual variable components and use them in gradient APIs instead (see the sketch after this list).
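As a hedged sketch of the tf.GradientTape workaround, reusing the Dense module from the partitioning example above:

with strategy.scope():
  dense = Dense()  # `w` is partitioned into 2 shards, as shown above

x = tf.ones([100, 10])  # matches the elementwise multiply in `Dense.__call__`
with tf.GradientTape() as tape:
  loss = tf.reduce_sum(dense(x))

# Differentiate with respect to the component variables rather than the
# sharded container itself.
grads = tape.gradient(loss, dense.variables)
assert len(grads) == len(dense.variables)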
Dataset preparation
With tf.distribute.experimental.ParameterServerStrategy
, a dataset is created in each of the workers to be used for training. This is done by creating a dataset_fn
that takes no argument and returns a tf.data.Dataset
, and passing the dataset_fn
into tf.distribute.experimental.coordinator.ClusterCoordinator.create_per_worker_dataset. We recommend that the dataset be shuffled and repeated so the examples run through the training as evenly as possible.
def dataset_fn():
  filenames = ...
  dataset = tf.data.Dataset.from_tensor_slices(filenames)

  # Dataset is recommended to be shuffled, and repeated.
  return dataset.shuffle(buffer_size=...).repeat().batch(batch_size=...)

coordinator = tf.distribute.experimental.coordinator.ClusterCoordinator(
    strategy=...)
distributed_dataset = coordinator.create_per_worker_dataset(dataset_fn)
Limitations
tf.distribute.experimental.ParameterServerStrategy
in TF2 is experimental, and the API is subject to further changes.
tf.distribute.experimental.ParameterServerStrategy
does not yet support training with GPU(s). This is a feature request being developed.
tf.distribute.experimental.ParameterServerStrategy currently only supports the custom training loop API in TF2. Usage with the Keras compile/fit API is being developed.
tf.distribute.experimental.ParameterServerStrategy
must be used with tf.distribute.experimental.coordinator.ClusterCoordinator
.
Args | |
---|---|
cluster_resolver | a tf.distribute.cluster_resolver.ClusterResolver object. |
variable_partitioner | a tf.distribute.experimental.partitioners.Partitioner that specifies how to partition variables. If None, variables will not be partitioned. |
Attributes | |
---|---|
cluster_resolver | Returns the cluster resolver associated with this strategy. In general, multi-worker strategies that need cluster information have an associated tf.distribute.cluster_resolver.ClusterResolver, which is returned by this property; single-worker strategies usually do not have one, in which case this property returns None. The cluster resolver is useful when the user needs to access cluster information such as the task type and task id of the current task, for example to run role-specific code by checking strategy.cluster_resolver.task_type against "worker" or "ps" (as the "Cluster setup" example above does with TF_CONFIG). See tf.distribute.cluster_resolver.ClusterResolver for more information. |
extended | tf.distribute.StrategyExtended with additional methods. |
num_replicas_in_sync | Returns number of replicas over which gradients are aggregated. |
distribute_datasets_from_function
distribute_datasets_from_function( dataset_fn, options=None )
Distributes tf.data.Dataset
instances created by calls to dataset_fn
.
The argument dataset_fn
that users pass in is an input function that has a tf.distribute.InputContext
argument and returns a tf.data.Dataset
instance. It is expected that the returned dataset from dataset_fn
is already batched by per-replica batch size (i.e. global batch size divided by the number of replicas in sync) and sharded. tf.distribute.Strategy.distribute_datasets_from_function
does not batch or shard the tf.data.Dataset
instance returned from the input function. dataset_fn
will be called on the CPU device of each of the workers and each generates a dataset where every replica on that worker will dequeue one batch of inputs (i.e. if a worker has two replicas, two batches will be dequeued from the Dataset
every step).
This method can be used for several purposes. First, it allows you to specify your own batching and sharding logic. (In contrast, tf.distribute.experimental_distribute_dataset
does batching and sharding for you.) For example, where experimental_distribute_dataset
is unable to shard the input files, this method might be used to manually shard the dataset (avoiding the slow fallback behavior in experimental_distribute_dataset
). In cases where the dataset is infinite, this sharding can be done by creating dataset replicas that differ only in their random seed.
The dataset_fn
should take a tf.distribute.InputContext
instance where information about batching and input replication can be accessed.
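A minimal sketch of such an input function (global_batch_size is a user-chosen value, not part of the API):

global_batch_size = 64

def dataset_fn(input_context):
  # Derive the per-replica batch size from the global batch size.
  batch_size = input_context.get_per_replica_batch_size(global_batch_size)
  dataset = tf.data.Dataset.range(1000)
  # Shard manually across input pipelines (typically one per worker), then
  # batch by the per-replica batch size.
  dataset = dataset.shard(input_context.num_input_pipelines,
                          input_context.input_pipeline_id)
  return dataset.batch(batch_size).prefetch(2)

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)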
You can use element_spec
property of the tf.distribute.DistributedDataset
returned by this API to query the tf.TypeSpec
of the elements returned by the iterator. This can be used to set the input_signature
property of a tf.function
. Follow tf.distribute.DistributedDataset.element_spec
to see an example.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args | |
---|---|
dataset_fn | A function taking a tf.distribute.InputContext instance and returning a tf.data.Dataset . |
options | tf.distribute.InputOptions used to control options on how this dataset is distributed. |
Returns | |
---|---|
A tf.distribute.DistributedDataset . |
experimental_distribute_dataset
experimental_distribute_dataset( dataset, options=None )
Creates tf.distribute.DistributedDataset
from tf.data.Dataset
.
The returned tf.distribute.DistributedDataset
can be iterated over similar to regular datasets. NOTE: The user cannot add any more transformations to a tf.distribute.DistributedDataset
. You can only create an iterator or examine the tf.TypeSpec
of the data generated by it. See API docs of tf.distribute.DistributedDataset
to learn more.
The following is an example:
global_batch_size = 2
# Passing the devices is optional.
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
# Create a dataset
dataset = tf.data.Dataset.range(4).batch(global_batch_size)
# Distribute that dataset
dist_dataset = strategy.experimental_distribute_dataset(dataset)

@tf.function
def replica_fn(input):
  return input * 2

result = []
# Iterate over the `tf.distribute.DistributedDataset`
for x in dist_dataset:
  # process dataset elements
  result.append(strategy.run(replica_fn, args=(x,)))
print(result)
# [PerReplica:{
#   0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([0])>,
#   1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([2])>
# }, PerReplica:{
#   0: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([4])>,
#   1: <tf.Tensor: shape=(1,), dtype=int64, numpy=array([6])>
# }]
Three key actions happening under the hood of this method are batching, sharding, and prefetching.
In the code snippet above, dataset
is batched by global_batch_size
, and calling experimental_distribute_dataset
on it rebatches dataset
to a new batch size that is equal to the global batch size divided by the number of replicas in sync. We iterate through it using a Pythonic for loop. x
is a tf.distribute.DistributedValues
containing data for all replicas, and each replica gets data of the new batch size. tf.distribute.Strategy.run
will take care of feeding the right per-replica data in x
to the right replica_fn
executed on each replica.
Sharding covers autosharding across multiple workers and within every worker. First, in multi-worker distributed training (i.e. when you use tf.distribute.experimental.MultiWorkerMirroredStrategy or tf.distribute.TPUStrategy), autosharding a dataset over a set of workers means that each worker is assigned a subset of the entire dataset (if the right tf.data.experimental.AutoShardPolicy is set). This is to ensure that at each step, a global batch size of non-overlapping dataset elements will be processed by each worker. Autosharding has a couple of different options that can be specified using tf.data.experimental.DistributeOptions. Then, sharding within each worker means the method will split the data among all the worker devices (if more than one is present). This will happen regardless of multi-worker autosharding.
Note: for autosharding across multiple workers, the default mode is tf.data.experimental.AutoShardPolicy.AUTO. This mode will attempt to shard the input dataset by files if the dataset is being created out of reader datasets (e.g. tf.data.TFRecordDataset, tf.data.TextLineDataset, etc.) or otherwise shard the dataset by data, where each of the workers will read the entire dataset and only process the shard assigned to it. However, if you have less than one input file per worker, we suggest that you disable dataset autosharding across workers by setting the tf.data.experimental.DistributeOptions.auto_shard_policy to be tf.data.experimental.AutoShardPolicy.OFF.
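For example, autosharding across workers can be disabled like this (a small sketch using the tf.data options API):

options = tf.data.Options()
options.experimental_distribute.auto_shard_policy = (
    tf.data.experimental.AutoShardPolicy.OFF)
dataset = dataset.with_options(options)
dist_dataset = strategy.experimental_distribute_dataset(dataset)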
By default, this method adds a prefetch transformation at the end of the user provided tf.data.Dataset
instance. The buffer_size argument of the prefetch transformation is equal to the number of replicas in sync.
If the above batch splitting and dataset sharding logic is undesirable, please use tf.distribute.Strategy.distribute_datasets_from_function
instead, which does not do any automatic batching or sharding for you.
Note: If you are using TPUStrategy, the order in which the data is processed by the workers when using tf.distribute.Strategy.experimental_distribute_dataset or tf.distribute.Strategy.distribute_datasets_from_function is not guaranteed. This is typically required if you are using tf.distribute to scale prediction. You can however insert an index for each element in the batch and order outputs accordingly. Refer to this snippet for an example of how to order outputs.
Note: Stateful dataset transformations are currently not supported with tf.distribute.experimental_distribute_dataset or tf.distribute.distribute_datasets_from_function. Any stateful ops that the dataset may have are currently ignored. For example, if your dataset has a map_fn that uses tf.random.uniform to rotate an image, then you have a dataset graph that depends on state (i.e. the random seed) on the local machine where the python process is being executed.
For a tutorial on more usage and properties of this method, refer to the tutorial on distributed input. If you are interested in last partial batch handling, read this section.
Args | |
---|---|
dataset | tf.data.Dataset that will be sharded across all replicas using the rules stated above. |
options | tf.distribute.InputOptions used to control options on how this dataset is distributed. |
Returns | |
---|---|
A tf.distribute.DistributedDataset . |
experimental_distribute_values_from_function
experimental_distribute_values_from_function( value_fn )
Generates tf.distribute.DistributedValues
from value_fn
.
This function is to generate tf.distribute.DistributedValues
to pass into run
, reduce
, or other methods that take distributed values when not using datasets.
Args | |
---|---|
value_fn | The function to run to generate values. It is called for each replica with tf.distribute.ValueContext as the sole argument. It must return a Tensor or a type that can be converted to a Tensor. |
Returns | |
---|---|
A tf.distribute.DistributedValues containing a value for each replica. |
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return tf.constant(1.)
distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
# (<tf.Tensor: shape=(), dtype=float32, numpy=1.0>,
#  <tf.Tensor: shape=(), dtype=float32, numpy=1.0>)

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
array_value = np.array([3., 2., 1.])
def value_fn(ctx):
  return array_value[ctx.replica_id_in_sync_group]
distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
# (3.0, 2.0)

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def value_fn(ctx):
  return ctx.num_replicas_in_sync
distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))
local_result = strategy.experimental_local_results(distributed_values)
local_result
# (2, 2)

strategy = tf.distribute.TPUStrategy()
worker_devices = strategy.extended.worker_devices
multiple_values = []
for i in range(strategy.num_replicas_in_sync):
  with tf.device(worker_devices[i]):
    multiple_values.append(tf.constant(1.0))

def value_fn(ctx):
  return multiple_values[ctx.replica_id_in_sync_group]

distributed_values = (
    strategy.experimental_distribute_values_from_function(value_fn))
experimental_local_results
experimental_local_results( value )
Returns the list of all local per-replica values contained in value
.
Note: This only returns values on the worker initiated by this client. When using a tf.distribute.Strategy like tf.distribute.experimental.MultiWorkerMirroredStrategy, each worker will be its own client, and this function will only return values computed on that worker.
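A small illustrative sketch (assuming two local GPUs):

strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])

@tf.function
def step_fn():
  return tf.constant(1.0)

per_replica = strategy.run(step_fn)
# A tuple with one tensor per local replica, e.g. two tensors equal to 1.0 here.
local_values = strategy.experimental_local_results(per_replica)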
Args | |
---|---|
value | A value returned by experimental_run() , run() , extended.call_for_each_replica() , or a variable created in scope . |
Returns | |
---|---|
A tuple of values contained in value . If value represents a single value, this returns (value,). |
gather
gather( value, axis )
Gather value
across replicas along axis
to the current device.
Given a tf.distribute.DistributedValues
or tf.Tensor
-like object value
, this API gathers and concatenates value
across replicas along the axis
-th dimension. The result is copied to the "current" device, which would typically be the CPU of the worker on which the program is running. For tf.distribute.TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker. This API can only be called in the cross-replica context. For a counterpart in the replica context, see tf.distribute.ReplicaContext.all_gather.
Note: For all strategies except tf.distribute.TPUStrategy, the input value on different replicas must have the same rank, and their shapes must be the same in all dimensions except the axis-th dimension. In other words, their shapes cannot be different in a dimension d where d does not equal the axis argument. For example, given a tf.distribute.DistributedValues with component tensors of shape (1, 2, 3) and (1, 3, 3) on two replicas, you can call gather(..., axis=1, ...) on it, but not gather(..., axis=0, ...) or gather(..., axis=2, ...). However, for tf.distribute.TPUStrategy.gather, all tensors must have exactly the same rank and same shape.
Note: Given a tf.distribute.DistributedValues value, its component tensors must have a non-zero rank. Otherwise, consider using tf.expand_dims before gathering them.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# A DistributedValues with component tensor of shape (2, 1) on each replica
distributed_values = strategy.experimental_distribute_values_from_function(
    lambda _: tf.identity(tf.constant([[1], [2]])))
@tf.function
def run():
  return strategy.gather(distributed_values, axis=0)
run()
# <tf.Tensor: shape=(4, 1), dtype=int32, numpy=
# array([[1],
#        [2],
#        [1],
#        [2]], dtype=int32)>
Consider the following example for more combinations:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1", "GPU:2", "GPU:3"])
single_tensor = tf.reshape(tf.range(6), shape=(1, 2, 3))
distributed_values = strategy.experimental_distribute_values_from_function(
    lambda _: tf.identity(single_tensor))
@tf.function
def run(axis):
  return strategy.gather(distributed_values, axis=axis)

axis = 0
run(axis)
# <tf.Tensor: shape=(4, 2, 3), dtype=int32, numpy=
# array([[[0, 1, 2], [3, 4, 5]],
#        [[0, 1, 2], [3, 4, 5]],
#        [[0, 1, 2], [3, 4, 5]],
#        [[0, 1, 2], [3, 4, 5]]], dtype=int32)>

axis = 1
run(axis)
# <tf.Tensor: shape=(1, 8, 3), dtype=int32, numpy=
# array([[[0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5],
#         [0, 1, 2], [3, 4, 5], [0, 1, 2], [3, 4, 5]]], dtype=int32)>

axis = 2
run(axis)
# <tf.Tensor: shape=(1, 2, 12), dtype=int32, numpy=
# array([[[0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2],
#         [3, 4, 5, 3, 4, 5, 3, 4, 5, 3, 4, 5]]], dtype=int32)>
Args | |
---|---|
value | a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run , to be combined into a single tensor. It can also be a regular tensor when used with tf.distribute.OneDeviceStrategy or the default strategy. The tensors that constitute the DistributedValues can only be dense tensors with non-zero rank, NOT a tf.IndexedSlices . |
axis | 0-D int32 Tensor. Dimension along which to gather. Must be in the range [0, rank(value)). |
Returns | |
---|---|
A Tensor that's the concatenation of value across replicas along axis dimension. |
reduce
reduce( reduce_op, value, axis )
Reduce value
across replicas and return result on current device.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
def step_fn():
  i = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.identity(i)

per_replica_result = strategy.run(step_fn)
total = strategy.reduce("SUM", per_replica_result, axis=None)
total
# <tf.Tensor: shape=(), dtype=int32, numpy=1>
To see how this would look with multiple replicas, consider the same example with MirroredStrategy with 2 GPUs:
strategy = tf.distribute.MirroredStrategy(devices=["GPU:0", "GPU:1"])
def step_fn():
  i = tf.distribute.get_replica_context().replica_id_in_sync_group
  return tf.identity(i)

per_replica_result = strategy.run(step_fn)
# Check devices on which per replica result is:
strategy.experimental_local_results(per_replica_result)[0].device
# /job:localhost/replica:0/task:0/device:GPU:0
strategy.experimental_local_results(per_replica_result)[1].device
# /job:localhost/replica:0/task:0/device:GPU:1
total = strategy.reduce("SUM", per_replica_result, axis=None)
# Check device on which reduced result is:
total.device
# /job:localhost/replica:0/task:0/device:CPU:0
This API is typically used for aggregating the results returned from different replicas, for reporting etc. For example, loss computed from different replicas can be averaged using this API before printing.
Note: The result is copied to the "current" device - which would typically be the CPU of the worker on which the program is running. For TPUStrategy, it is the first TPU host. For multi-client MultiWorkerMirroredStrategy, this is the CPU of each worker.
There are a number of different tf.distribute APIs for reducing values across replicas:
- tf.distribute.ReplicaContext.all_reduce: This differs from Strategy.reduce in that it is for replica context and does not copy the results to the host device. all_reduce should typically be used for reductions inside the training step such as gradients.
- tf.distribute.StrategyExtended.reduce_to and tf.distribute.StrategyExtended.batch_reduce_to: These APIs are more advanced versions of Strategy.reduce as they allow customizing the destination of the result. They are also called in cross-replica context.

What should axis be?
Given a per-replica value returned by run
, say a per-example loss, the batch will be divided across all the replicas. This function allows you to aggregate across replicas and optionally also across batch elements by specifying the axis parameter accordingly.
For example, if you have a global batch size of 8 and 2 replicas, values for examples [0, 1, 2, 3]
will be on replica 0 and [4, 5, 6, 7]
will be on replica 1. With axis=None
, reduce
will aggregate only across replicas, returning [0+4, 1+5, 2+6, 3+7]
. This is useful when each replica is computing a scalar or some other value that doesn't have a "batch" dimension (like a gradient or loss).
strategy.reduce("sum", per_replica_result, axis=None)
Sometimes, you will want to aggregate across both the global batch and all replicas. You can get this behavior by specifying the batch dimension as the axis
, typically axis=0
. In this case it would return a scalar 0+1+2+3+4+5+6+7
.
strategy.reduce("sum", per_replica_result, axis=0)
If there is a last partial batch, you will need to specify an axis so that the resulting shape is consistent across replicas. So if the last batch has size 6 and it is divided into [0, 1, 2, 3] and [4, 5], you would get a shape mismatch unless you specify axis=0
. If you specify tf.distribute.ReduceOp.MEAN
, using axis=0
will use the correct denominator of 6. Contrast this with computing reduce_mean
to get a scalar value on each replica and this function to average those means, which will weigh some values 1/8
and others 1/4
.
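For example, to average a per-example loss over the whole global batch, including a possibly smaller last batch, one could write (a sketch; per_replica_losses is assumed to be the per-example loss returned by strategy.run):

mean_loss = strategy.reduce(
    tf.distribute.ReduceOp.MEAN, per_replica_losses, axis=0)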
Args | |
---|---|
reduce_op | a tf.distribute.ReduceOp value specifying how values should be combined. Allows using string representation of the enum such as "SUM", "MEAN". |
value | a tf.distribute.DistributedValues instance, e.g. returned by Strategy.run , to be combined into a single tensor. It can also be a regular tensor when used with OneDeviceStrategy or default strategy. |
axis | specifies the dimension to reduce along within each replica's tensor. Should typically be set to the batch dimension, or None to only reduce across replicas (e.g. if the tensor has no batch dimension). |
Returns | |
---|---|
A Tensor . |
run
run( fn, args=(), kwargs=None, options=None )
Invokes fn
on each replica, with the given arguments.
This method is the primary way to distribute your computation with a tf.distribute object. It invokes fn
on each replica. If args
or kwargs
have tf.distribute.DistributedValues
, such as those produced by a tf.distribute.DistributedDataset
from tf.distribute.Strategy.experimental_distribute_dataset
or tf.distribute.Strategy.distribute_datasets_from_function
, when fn
is executed on a particular replica, it will be executed with the component of tf.distribute.DistributedValues
that corresponds to that replica.
fn
is invoked under a replica context. fn
may call tf.distribute.get_replica_context()
to access members such as all_reduce
. Please see the module-level docstring of tf.distribute for the concept of replica context.
All arguments in args
or kwargs
should either be Python values of a nested structure of tensors, e.g. a list of tensors, in which case args
and kwargs
will be passed to the fn
invoked on each replica. Or args
or kwargs
can be tf.distribute.DistributedValues
containing tensors or composite tensors, i.e. tf.compat.v1.TensorInfo.CompositeTensor
, in which case each fn
call will get the component of a tf.distribute.DistributedValues
corresponding to its replica.
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
tensor_input = tf.constant(3.0)
@tf.function
def replica_fn(input):
  return input * 2.0
result = strategy.run(replica_fn, args=(tensor_input,))
result
# PerReplica:{
#   0: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>,
#   1: <tf.Tensor: shape=(), dtype=float32, numpy=6.0>
# }
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
@tf.function
def run():
  def value_fn(value_context):
    return value_context.num_replicas_in_sync
  distributed_values = (
      strategy.experimental_distribute_values_from_function(value_fn))
  def replica_fn2(input):
    return input * 2
  return strategy.run(replica_fn2, args=(distributed_values,))
result = run()
result
# <tf.Tensor: shape=(), dtype=int32, numpy=4>
tf.distribute.ReplicaContext can be used to allreduce values:

strategy = tf.distribute.MirroredStrategy(["gpu:0", "gpu:1"])
@tf.function
def run():
  def value_fn(value_context):
    return tf.constant(value_context.replica_id_in_sync_group)
  distributed_values = (
      strategy.experimental_distribute_values_from_function(value_fn))
  def replica_fn(input):
    return tf.distribute.get_replica_context().all_reduce("sum", input)
  return strategy.run(replica_fn, args=(distributed_values,))
result = run()
result
# PerReplica:{
#   0: <tf.Tensor: shape=(), dtype=int32, numpy=1>,
#   1: <tf.Tensor: shape=(), dtype=int32, numpy=1>
# }
Args | |
---|---|
fn | The function to run on each replica. |
args | Optional positional arguments to fn . Its element can be a Python value, a tensor or a tf.distribute.DistributedValues . |
kwargs | Optional keyword arguments to fn . Its element can be a Python value, a tensor or a tf.distribute.DistributedValues . |
options | An optional instance of tf.distribute.RunOptions specifying the options to run fn . |
Returns | |
---|---|
Merged return value of fn across replicas. The structure of the return value is the same as the return value from fn . Each element in the structure can either be tf.distribute.DistributedValues , Tensor objects, or Tensor s (for example, if running on a single replica). |
scope
scope()
Context manager to make the strategy current and distribute variables.
This method returns a context manager, and is used as follows:
strategy = tf.distribute.MirroredStrategy(["GPU:0", "GPU:1"])
# Variable created inside scope:
with strategy.scope():
  mirrored_variable = tf.Variable(1.)
mirrored_variable
# MirroredVariable:{
#   0: <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>,
#   1: <tf.Variable 'Variable/replica_1:0' shape=() dtype=float32, numpy=1.0>
# }
# Variable created outside scope:
regular_variable = tf.Variable(1.)
regular_variable
# <tf.Variable 'Variable:0' shape=() dtype=float32, numpy=1.0>
What happens when Strategy.scope is entered?
- strategy is installed in the global context as the "current" strategy. Inside this scope, tf.distribute.get_strategy() will now return this strategy. Outside this scope, it returns the default no-op strategy.
- Entering the scope also enters the "cross-replica context". See tf.distribute.StrategyExtended for an explanation on cross-replica and replica contexts.
- Variable creation inside scope is intercepted by the strategy. Each strategy defines how it wants to affect the variable creation. Sync strategies like MirroredStrategy, TPUStrategy and MultiWorkerMirroredStrategy create variables replicated on each replica, whereas ParameterServerStrategy creates variables on the parameter servers. This is done using a custom tf.variable_creator_scope.
- In some strategies, a default device scope may also be entered: in MultiWorkerMirroredStrategy, a default device scope of "/CPU:0" is entered on each worker.

Note: Entering a scope does not automatically distribute a computation, except in the case of a high level training framework like Keras model.fit. If you're not using model.fit, you need to use the strategy.run API to explicitly distribute that computation. See an example in the custom training loop tutorial.
What should be in scope and what should be outside?
There are a number of requirements on what needs to happen inside the scope. However, in places where we have information about which strategy is in use, we often enter the scope for the user, so they don't have to do it explicitly (i.e. calling those either inside or outside the scope is OK).
- Anything that creates variables that should be distributed variables must be called in a strategy.scope. This can be either by directly putting it in scope, or relying on another API like strategy.run or model.fit to enter it for you. Any variable that is created outside scope will not be distributed and may have performance implications. Common things that create variables in TF: models, optimizers, metrics. These should always be created inside the scope. Another source of variable creation can be a checkpoint restore - when variables are created lazily. Note that any variable created inside a strategy captures the strategy information. So reading and writing to these variables outside the strategy.scope can also work seamlessly, without the user having to enter the scope.
- Some strategy APIs (such as strategy.run and strategy.reduce) which require to be in a strategy's scope, enter the scope for you automatically, which means when using those APIs you don't need to enter the scope yourself.
- When a tf.keras.Model is created inside a strategy.scope, we capture this information. When high level training framework methods such as model.compile, model.fit, etc. are then called on this model, we automatically enter the scope, as well as use this strategy to distribute the training. See a detailed example in the distributed keras tutorial. Note that simply calling the model(..) is not impacted - only high level training framework APIs are. model.compile, model.fit, model.evaluate, model.predict and model.save can all be called inside or outside the scope.
- The following can be either inside or outside the scope:
  - Defining tf.functions that represent your training step.
  - Saving APIs such as tf.saved_model.save. Loading creates variables, so that should go inside the scope if you want to train the model in a distributed way.
  - Checkpoint restore: checkpoint.restore may sometimes need to be inside scope if it creates variables.

Returns | |
---|---|
A context manager. |
© 2020 The TensorFlow Authors. All rights reserved.
Licensed under the Creative Commons Attribution License 3.0.
Code samples licensed under the Apache 2.0 License.
https://www.tensorflow.org/versions/r2.4/api_docs/python/tf/distribute/experimental/ParameterServerStrategy