torch.backends controls the behavior of various backends that PyTorch supports.
These backends include torch.backends.cuda and torch.backends.cudnn, covered below.
torch.backends.cuda.is_built() returns whether PyTorch is built with CUDA support. Note that this doesn't necessarily mean CUDA is available; it only means that if this PyTorch binary were run on a machine with working CUDA drivers and devices, we would be able to use it.
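A minimal sketch distinguishing the two checks: is_built() answers the compile-time question, while the standard runtime check torch.cuda.is_available() additionally requires a working driver and device.

```python
import torch

# Compile-time: was this binary built with CUDA support?
built = torch.backends.cuda.is_built()

# Runtime: is there also a working driver and at least one device?
available = torch.cuda.is_available()

# available implies built, but not the reverse: a CUDA build running
# on a machine without a GPU reports built=True, available=False.
print(f"built with CUDA: {built}, CUDA available: {available}")
```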
torch.backends.cuda.cufft_plan_cache caches the cuFFT plans.
cufft_plan_cache.size: a read-only int that shows the number of plans currently in the cuFFT plan cache.
cufft_plan_cache.max_size: an int that controls the capacity of the cuFFT plan cache.
cufft_plan_cache.clear() clears the cuFFT plan cache.
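A short sketch of inspecting and tuning the plan cache using the attributes above; the access is guarded with torch.cuda.is_available() because the cache is maintained per CUDA device and its attributes assume a usable device.

```python
import torch

# The cuFFT plan cache is only meaningful when CUDA is usable;
# its per-device attributes assume a current CUDA device.
if torch.cuda.is_available():
    cache = torch.backends.cuda.cufft_plan_cache
    print("plans cached:", cache.size)   # read-only count of cached plans
    cache.max_size = 32                  # bound the cache capacity
    cache.clear()                        # drop all cached plans
else:
    print("CUDA not available; cuFFT plan cache not in use")
```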
torch.backends.cudnn.enabled: a bool that controls whether cuDNN is enabled.
torch.backends.cudnn.benchmark: a bool that, if True, causes cuDNN to benchmark multiple convolution algorithms and select the fastest.
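For example, these flags are typically set once at startup. Benchmark mode pays off when input shapes stay fixed across iterations, since cuDNN re-runs its timing for every new shape it sees:

```python
import torch

# cuDNN must be enabled for benchmark mode to have any effect.
torch.backends.cudnn.enabled = True

# Ask cuDNN to time several convolution algorithms on first use of a
# given input shape and cache the fastest one. Best when shapes are
# static; with varying shapes the repeated benchmarking adds overhead.
torch.backends.cudnn.benchmark = True
```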
© 2019 Torch Contributors
Licensed under the 3-clause BSD License.