torch.cuda.memory.get_allocator_backend
torch.cuda.memory.get_allocator_backend()[source]
Return a string describing the active allocator backend as set by
`PYTORCH_CUDA_ALLOC_CONF`. Currently available backends are `native`
(PyTorch's native caching allocator) and `cudaMallocAsync` (CUDA's
built-in asynchronous allocator).

Note
See Memory management for details on choosing the allocator backend.
- Return type
  str
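A minimal usage sketch, guarded so it runs safely on CPU-only machines (the query itself requires a CUDA build of PyTorch):

```python
import torch

if torch.cuda.is_available():
    # Returns "native" (the default) or "cudaMallocAsync", depending on
    # how PYTORCH_CUDA_ALLOC_CONF was set before process startup, e.g.
    #   PYTORCH_CUDA_ALLOC_CONF=backend:cudaMallocAsync python script.py
    backend = torch.cuda.memory.get_allocator_backend()
    print(f"Active CUDA allocator backend: {backend}")
else:
    print("CUDA not available; no allocator backend to report")
```

Note that the backend is chosen at process startup via the environment variable; it cannot be switched at runtime through this function, which only reports the active setting.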