
torch.cuda.memory.reset_accumulated_memory_stats

torch.cuda.memory.reset_accumulated_memory_stats(device=None) [source]

Reset the “accumulated” (historical) stats tracked by the CUDA memory allocator.

See memory_stats() for details. Accumulated stats correspond to the “allocated” and “freed” keys in each individual stat dict, as well as “num_alloc_retries” and “num_ooms”.

Parameters

device (torch.device or int, optional) – selected device. Resets statistics for the current device, given by current_device(), if device is None (default).
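A minimal sketch of the reset-then-measure pattern this function enables: clear the accumulated counters, run a workload, and read the totals from memory_stats(). The snippet is guarded with torch.cuda.is_available() so it is a no-op on CPU-only machines; the specific stat keys shown ("allocation.all.allocated", "num_alloc_retries", "num_ooms") follow the naming described in the memory_stats() documentation.

```python
import torch

if torch.cuda.is_available():
    # Clear accumulated (historical) counters on the current device.
    torch.cuda.memory.reset_accumulated_memory_stats()

    # Some workload that allocates GPU memory.
    x = torch.empty(1024, 1024, device="cuda")
    del x

    stats = torch.cuda.memory.memory_stats()
    # Accumulated counters now reflect only activity since the reset.
    print(stats["allocation.all.allocated"])
    print(stats["num_alloc_retries"], stats["num_ooms"])
```

Note that this resets only the accumulated ("allocated"/"freed") counters; peak values are cleared separately with reset_peak_memory_stats().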

Note

See Memory management for more details about GPU memory management.

© 2025, PyTorch Contributors
PyTorch has a BSD-style license, as found in the LICENSE file.
https://docs.pytorch.org/docs/2.9/generated/torch.cuda.memory.reset_accumulated_memory_stats.html