torch.cuda.amp provides convenience methods for mixed precision, where some operations use the torch.float32 (float) datatype and other operations use torch.float16 (half). Some ops, like linear layers and convolutions, are much faster in float16. Other ops, like reductions, often require the dynamic range of float32. Mixed precision tries to match each op to its appropriate datatype.
Ordinarily, “automatic mixed precision training” uses torch.cuda.amp.autocast and torch.cuda.amp.GradScaler together, as shown in the Automatic Mixed Precision examples and the Automatic Mixed Precision recipe. However, autocast and GradScaler are modular, and may be used separately if desired.
Instances of autocast serve as context managers or decorators that allow regions of your script to run in mixed precision.
In these regions, CUDA ops run in an op-specific dtype chosen by autocast to improve performance while maintaining accuracy. See the Autocast Op Reference for details.
When entering an autocast-enabled region, Tensors may be of any type. You should not call .half() on your model(s) or inputs when using autocasting.
autocast should wrap only the forward pass(es) of your network, including the loss computation(s). Backward passes under autocast are not recommended. Backward ops run in the same type that autocast used for corresponding forward ops.
# Creates model and optimizer in default precision
model = Net().cuda()
optimizer = optim.SGD(model.parameters(), ...)

for input, target in data:
    optimizer.zero_grad()

    # Enables autocasting for the forward pass (model + loss)
    with autocast():
        output = model(input)
        loss = loss_fn(output, target)

    # Exits the context manager before backward()
    loss.backward()
    optimizer.step()
See the Automatic Mixed Precision examples for usage (along with gradient scaling) in more complex scenarios (e.g., gradient penalty, multiple models/losses, custom autograd functions).
autocast can also be used as a decorator, e.g., on the forward method of your model:
class AutocastModel(nn.Module):
    ...
    @autocast()
    def forward(self, input):
        ...
Floating-point Tensors produced in an autocast-enabled region may be float16. After returning to an autocast-disabled region, using them with floating-point Tensors of different dtypes may cause type mismatch errors. If so, cast the Tensor(s) produced in the autocast region back to float32 (or another dtype if desired). If a Tensor from the autocast region is already float32, the cast is a no-op and incurs no additional overhead. Example:
# Creates some tensors in default dtype (here assumed to be float32)
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.rand((8, 8), device="cuda")

with autocast():
    # torch.mm is on autocast's list of ops that should run in float16.
    # Inputs are float32, but the op runs in float16 and produces float16 output.
    # No manual casts are required.
    e_float16 = torch.mm(a_float32, b_float32)

    # Also handles mixed input types
    f_float16 = torch.mm(d_float32, e_float16)

# After exiting autocast, calls f_float16.float() to use with d_float32
g_float32 = torch.mm(d_float32, f_float16.float())
Type mismatch errors in an autocast-enabled region are a bug; if this is what you observe, please file an issue.
autocast(enabled=False) subregions can be nested in autocast-enabled regions. Locally disabling autocast can be useful, for example, if you want to force a subregion to run in a particular dtype. Disabling autocast gives you explicit control over the execution type. In the subregion, inputs from the surrounding region should be cast to the desired dtype before use:
# Creates some tensors in default dtype (here assumed to be float32)
a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.rand((8, 8), device="cuda")

with autocast():
    e_float16 = torch.mm(a_float32, b_float32)

    with autocast(enabled=False):
        # Calls e_float16.float() to ensure float32 execution
        # (necessary because e_float16 was created in an autocasted region)
        f_float32 = torch.mm(c_float32, e_float16.float())

    # No manual casts are required when re-entering the autocast-enabled region.
    # torch.mm again runs in float16 and produces float16 output, regardless of input types.
    g_float16 = torch.mm(d_float32, f_float32)
The autocast state is thread-local. If you want it enabled in a new thread, the context manager or decorator must be invoked in that thread. This affects torch.nn.parallel.DistributedDataParallel when used with more than one GPU per process (see Working with Multiple GPUs).
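For example, if forward runs in a side thread (as can happen with DistributedDataParallel using multiple GPUs per process), applying autocast to forward re-enables it in that thread. A minimal sketch; the model and layer are illustrative:

import torch
import torch.nn as nn
from torch.cuda.amp import autocast

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(8, 8)

    @autocast()  # re-enables autocast in whichever thread runs forward
    def forward(self, input):
        return self.linear(input)

Alternatively, wrap only the relevant part of forward in a with autocast(): block inside the method.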
enabled (bool, optional, default=True) – Whether autocasting should be enabled in the region.
Helper decorator for the forward method of custom autograd functions (subclasses of torch.autograd.Function).

cast_inputs (torch.dtype or None, optional, default=None) – If not None, when the decorated forward runs in an autocast-enabled region, casts incoming floating-point CUDA Tensors to the target dtype (non-floating-point Tensors are not affected), then executes forward with autocast disabled. If None, forward’s internal ops execute with the current autocast state.

If the decorated forward is called outside an autocast-enabled region, custom_fwd is a no-op and cast_inputs has no effect.
Helper decorator for backward methods of custom autograd functions (subclasses of torch.autograd.Function). Ensures that backward executes with the same autocast state as forward. See the example page for more detail.
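As an illustration, here is a sketch of a custom autograd function whose body requires float32, decorated so it is safe to call from autocast-enabled regions. The function itself (MyMM) is a hypothetical example:

import torch
from torch.cuda.amp import custom_fwd, custom_bwd

class MyMM(torch.autograd.Function):
    @staticmethod
    @custom_fwd(cast_inputs=torch.float32)
    def forward(ctx, a, b):
        # Inputs arriving from an autocast-enabled region are cast to float32 here,
        # and the body of forward runs with autocast disabled.
        ctx.save_for_backward(a, b)
        return a.mm(b)

    @staticmethod
    @custom_bwd
    def backward(ctx, grad):
        # Runs with the same autocast state that forward used
        # (disabled here, because cast_inputs was supplied).
        a, b = ctx.saved_tensors
        return grad.mm(b.t()), a.t().mm(grad)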
If the forward pass for a particular op has float16 inputs, the backward pass for that op will produce float16 gradients. Gradient values with small magnitudes may not be representable in float16. These values will flush to zero (“underflow”), so the update for the corresponding parameters will be lost.
To prevent underflow, “gradient scaling” multiplies the network’s loss(es) by a scale factor and invokes a backward pass on the scaled loss(es). Gradients flowing backward through the network are then scaled by the same factor. In other words, gradient values have a larger magnitude, so they don’t flush to zero.
Each parameter’s gradient (.grad attribute) should be unscaled before the optimizer updates the parameters, so the scale factor does not interfere with the learning rate.
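Putting the pieces together, a typical iteration with autocast and GradScaler looks like the following sketch, where Net, optim, data, and loss_fn stand in for your own model, optimizer, data source, and loss as in the earlier example:

from torch.cuda.amp import autocast, GradScaler

model = Net().cuda()
optimizer = optim.SGD(model.parameters(), ...)
scaler = GradScaler()

for input, target in data:
    optimizer.zero_grad()

    # Runs the forward pass (and loss computation) under autocast.
    with autocast():
        output = model(input)
        loss = loss_fn(output, target)

    # Scales the loss and calls backward() on the scaled loss
    # to produce scaled gradients.
    scaler.scale(loss).backward()

    # step() first unscales the gradients, then calls optimizer.step()
    # (or skips it if infs/NaNs are found in the gradients).
    scaler.step(optimizer)

    # Updates the scale for the next iteration.
    scaler.update()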
class torch.cuda.amp.GradScaler(init_scale=65536.0, growth_factor=2.0, backoff_factor=0.5, growth_interval=2000, enabled=True)
Returns a Python float containing the current scale, or 1.0 if scaling is disabled.
get_scale() incurs a CPU-GPU sync.
Loads the scaler state. If this instance is disabled, load_state_dict() is a no-op.
Multiplies (‘scales’) a tensor or list of tensors by the scale factor.
Returns scaled outputs. If this instance of GradScaler is not enabled, outputs are returned unmodified.
outputs (Tensor or iterable of Tensors) – Outputs to scale.
new_factor (float) – Value to use as the new scale backoff factor.
new_factor (float) – Value to use as the new scale growth factor.
new_interval (int) – Value to use as the new growth interval.
Returns the state of the scaler as a dict. It contains five entries:
"scale"- a Python float containing the current scale
"growth_factor"- a Python float containing the current growth factor
"backoff_factor"- a Python float containing the current backoff factor
"growth_interval"- a Python int containing the current growth interval
"_growth_tracker"- a Python int containing the number of recent consecutive unskipped steps.
If this instance is not enabled, returns an empty dict.
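For example, the scaler state can be checkpointed and restored together with the model and optimizer. A sketch; the filename is illustrative:

# Saving: include the scaler state alongside model and optimizer state.
checkpoint = {
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
    "scaler": scaler.state_dict(),
}
torch.save(checkpoint, "checkpoint.pt")

# Restoring
checkpoint = torch.load("checkpoint.pt")
model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])
scaler.load_state_dict(checkpoint["scaler"])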
step(optimizer, *args, **kwargs)
step() carries out the following two operations:

1. Internally invokes unscale_(optimizer) (unless unscale_() was explicitly called for optimizer earlier in the iteration). As part of the unscale_(), gradients are checked for infs/NaNs.
2. If no inf/NaN gradients are found, invokes optimizer.step() using the unscaled gradients. Otherwise, optimizer.step() is skipped to avoid corrupting the params.

*args and **kwargs are forwarded to optimizer.step(). Returns the return value of optimizer.step(*args, **kwargs).

Closure use is not currently supported.
Divides (“unscales”) the optimizer’s gradient tensors by the scale factor.
unscale_() is optional, serving cases where you need to modify or inspect gradients between the backward pass(es) and step(). If unscale_() is not called explicitly, gradients will be unscaled automatically during step().
Simple example, using unscale_() to enable clipping of unscaled gradients:
...
scaler.scale(loss).backward()
scaler.unscale_(optimizer)
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
scaler.step(optimizer)
scaler.update()
optimizer (torch.optim.Optimizer) – Optimizer that owns the gradients to be unscaled.
unscale_() does not incur a CPU-GPU sync.
unscale_() should only be called once per optimizer per step() call, and only after all gradients for that optimizer’s assigned parameters have been accumulated. Calling unscale_() twice for a given optimizer between each step() triggers a RuntimeError.
unscale_() may unscale sparse gradients out of place, replacing the .grad attribute.
Updates the scale factor.
If any optimizer steps were skipped, the scale is multiplied by backoff_factor to reduce it. If growth_interval unskipped iterations occurred consecutively, the scale is multiplied by growth_factor to increase it. Passing new_scale sets the scale directly.
new_scale (float or torch.cuda.FloatTensor, optional, default=None) – New scale factor.
update() should only be called at the end of the iteration, after scaler.step(optimizer) has been invoked for all optimizers used this iteration.
Only CUDA ops are eligible for autocasting.
Ops that run in float64 or non-floating-point dtypes are not eligible, and will run in these types whether or not autocast is enabled.
Only out-of-place ops and Tensor methods are eligible. In-place variants and calls that explicitly supply an out=... Tensor are allowed in autocast-enabled regions, but won’t go through autocasting. For example, in an autocast-enabled region a.addmm(b, c) can autocast, but a.addmm_(b, c) and a.addmm(b, c, out=d) cannot. For best performance and stability, prefer out-of-place ops in autocast-enabled regions.
Ops called with an explicit dtype=... argument are not eligible, and will produce output that respects the dtype argument.
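To make these eligibility rules concrete, here is a small sketch in the spirit of the examples above; the dtype comments assume default float32 inputs and that addmm autocasts to float16 (as stated above):

a_float32 = torch.rand((8, 8), device="cuda")
b_float32 = torch.rand((8, 8), device="cuda")
c_float32 = torch.rand((8, 8), device="cuda")
d_float32 = torch.empty((8, 8), device="cuda")

with autocast():
    # Out-of-place, no explicit out= or dtype=: eligible, runs in float16.
    e = a_float32.addmm(b_float32, c_float32)   # e.dtype is torch.float16

    # In-place variant: allowed, but not autocast; runs in a_float32's dtype (float32).
    a_float32.addmm_(b_float32, c_float32)

    # Explicit out= Tensor: allowed, but not autocast; writes float32 into d_float32.
    torch.addmm(a_float32, b_float32, c_float32, out=d_float32)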
The following lists describe the behavior of eligible ops in autocast-enabled regions. These ops always go through autocasting whether they are invoked as part of a torch.nn.Module, as a function, or as a torch.Tensor method. If functions are exposed in multiple namespaces, they go through autocasting regardless of the namespace.
Ops not listed below do not go through autocasting. They run in the type defined by their inputs. However, autocasting may still change the type in which unlisted ops run if they’re downstream from autocasted ops.
If an op is unlisted, we assume it’s numerically stable in float16. If you believe an unlisted op is numerically unstable in float16, please file an issue.
These ops don’t require a particular dtype for stability, but take multiple inputs and require that the inputs’ dtypes match. If all of the inputs are float16, the op runs in float16. If any of the inputs is float32, autocast casts all inputs to float32 and runs the op in float32.
Some ops not listed here (e.g., binary ops like add) natively promote inputs without autocasting’s intervention. If inputs are a mixture of float16 and float32, these ops run in float32 and produce float32 output, regardless of whether autocast is enabled.
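For instance, taking add as one of these natively promoting binary ops (as noted above), a minimal sketch:

x_float16 = torch.rand((8, 8), device="cuda", dtype=torch.float16)
y_float32 = torch.rand((8, 8), device="cuda")  # default float32

with autocast():
    # add is handled by ordinary type promotion, not by autocast:
    # mixed float16/float32 inputs promote to float32.
    z = torch.add(x_float16, y_float32)

assert z.dtype == torch.float32  # same result with autocast disabled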
The backward passes of torch.nn.functional.binary_cross_entropy() (and torch.nn.BCELoss, which wraps it) can produce gradients that aren’t representable in float16. In autocast-enabled regions, the forward input may be float16, which means the backward gradient must be representable in float16 (autocasting float16 forward inputs to float32 doesn’t help, because that cast must be reversed in backward). Therefore, binary_cross_entropy and BCELoss raise an error in autocast-enabled regions.

Many models use a sigmoid layer right before the binary cross entropy layer. In this case, combine the two layers using torch.nn.functional.binary_cross_entropy_with_logits() or torch.nn.BCEWithLogitsLoss. binary_cross_entropy_with_logits and BCEWithLogitsLoss are safe to autocast.
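A minimal sketch of the replacement; the tensor shapes are illustrative:

import torch
import torch.nn as nn
from torch.cuda.amp import autocast

logits = torch.randn(4, 1, device="cuda")   # raw scores, no sigmoid applied
target = torch.rand(4, 1, device="cuda")    # probabilities in [0, 1)

with autocast():
    # nn.BCELoss()(torch.sigmoid(logits), target) would raise an error here;
    # the fused version is safe to autocast.
    loss = nn.BCEWithLogitsLoss()(logits, target)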