PyTorch 2.9

NUMA Binding Utilities

Created On: Jul 25, 2025 | Last Updated On: Aug 12, 2025

class torch.numa.binding.AffinityMode(value) [source]

See torch.distributed.run for a description of each affinity mode's behavior.

class torch.numa.binding.NumaOptions(affinity_mode: torch.numa.binding.AffinityMode, should_fall_back_if_binding_fails: bool = False) [source]
affinity_mode: AffinityMode

should_fall_back_if_binding_fails: bool = False

If True, fall back to running the original command/entrypoint when we fail to compute or apply NUMA bindings.

You should avoid using this option! It is intended only as a safety mechanism for facilitating mass rollouts of NUMA binding.
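A minimal sketch of the fallback semantics described above, using only the standard library. The function `run_with_optional_numa_fallback` and its parameters are hypothetical names for illustration, and `os.sched_setaffinity` (Linux-only) stands in for the real binding logic, which PyTorch computes from NUMA topology:

```python
# Hypothetical sketch (not the PyTorch implementation): if applying a CPU
# binding fails and fallback is enabled, run the original entrypoint
# unmodified instead of raising.
import os

def run_with_optional_numa_fallback(entrypoint, cpus, fall_back=True):
    try:
        # Stand-in for computing/applying NUMA bindings.
        os.sched_setaffinity(0, cpus)
    except (OSError, ValueError):
        if not fall_back:
            raise  # surface the binding failure
        # Fall back: run the entrypoint with affinity unchanged.
    return entrypoint()

# An out-of-range CPU set makes the binding step fail, exercising
# the fallback path; the entrypoint still runs.
result = run_with_optional_numa_fallback(lambda: "ran", cpus={10**6})
print(result)  # "ran"
```

This mirrors why the option exists as a rollout safety net: a binding failure degrades to the unbound behavior rather than crashing the job.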

torch.numa.binding.maybe_temporarily_apply_numa_binding_to_current_thread(*, gpu_index, numa_options) [source]

Context manager that:

1. Applies NUMA binding to the current thread, suitable for the thread that will be interacting with GPU gpu_index.
2. Resets to the original CPU affinity before exiting the context manager.

Return type

Iterator[None]
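The apply-then-restore behavior can be sketched with a standard-library context manager. This is an illustrative analogue, not the PyTorch source: `temporary_cpu_affinity` is a hypothetical name, and it takes an explicit CPU set rather than deriving one from a GPU index as the real utility does:

```python
# Illustrative sketch: temporarily restrict the calling thread's CPU
# affinity, restoring the original affinity on exit, mirroring the
# documented apply/reset behavior.
import contextlib
import os

@contextlib.contextmanager
def temporary_cpu_affinity(cpus):
    """Bind the calling thread to `cpus`, then restore the prior affinity."""
    original = os.sched_getaffinity(0)  # current affinity (Linux-only)
    os.sched_setaffinity(0, cpus)
    try:
        yield
    finally:
        # Reset to the original CPU affinity before exiting, as the
        # torch.numa.binding context manager does.
        os.sched_setaffinity(0, original)

original = os.sched_getaffinity(0)
with temporary_cpu_affinity({min(original)}):
    # Inside the block, the thread is pinned to a single CPU.
    assert os.sched_getaffinity(0) == {min(original)}
# After the block, the original affinity is restored.
assert os.sched_getaffinity(0) == original
```

The try/finally restore is what makes the binding safe to apply per-thread: even if the body raises, the thread leaves the block with its original affinity.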

© 2025, PyTorch Contributors
PyTorch has a BSD-style license, as found in the LICENSE file.
https://docs.pytorch.org/docs/2.9/elastic/numa.html