torch.utils.cpp_extension.CppExtension(name, sources, *args, **kwargs)
Creates a setuptools.Extension for C++.
Convenience method that creates a setuptools.Extension with the bare minimum (but often sufficient) arguments to build a C++ extension.
All arguments are forwarded to the setuptools.Extension constructor.
>>> from setuptools import setup
>>> from torch.utils.cpp_extension import BuildExtension, CppExtension
>>> setup(
        name='extension',
        ext_modules=[
            CppExtension(
                name='extension',
                sources=['extension.cpp'],
                extra_compile_args=['-g']),
        ],
        cmdclass={
            'build_ext': BuildExtension
        })
torch.utils.cpp_extension.CUDAExtension(name, sources, *args, **kwargs)
Creates a setuptools.Extension for CUDA/C++.
Convenience method that creates a setuptools.Extension with the bare minimum (but often sufficient) arguments to build a CUDA/C++ extension. This includes the CUDA include path, library path and runtime library.
All arguments are forwarded to the setuptools.Extension constructor.
>>> from setuptools import setup
>>> from torch.utils.cpp_extension import BuildExtension, CUDAExtension
>>> setup(
        name='cuda_extension',
        ext_modules=[
            CUDAExtension(
                name='cuda_extension',
                sources=['extension.cpp', 'extension_kernel.cu'],
                extra_compile_args={'cxx': ['-g'],
                                    'nvcc': ['-O2']})
        ],
        cmdclass={
            'build_ext': BuildExtension
        })
torch.utils.cpp_extension.BuildExtension(*args, **kwargs) → None
A custom setuptools build extension.
This setuptools.build_ext subclass takes care of passing the minimum required compiler flags (e.g. -std=c++14), as well as mixed C++/CUDA compilation (and support for CUDA files in general).
When using BuildExtension, it is allowed to supply a dictionary for extra_compile_args (rather than the usual list) that maps from languages (cxx or nvcc) to a list of additional compiler flags to supply to the compiler. This makes it possible to supply different flags to the C++ and CUDA compiler during mixed compilation.
use_ninja (bool): If use_ninja is True (default), then we attempt to build using the Ninja backend. Ninja greatly speeds up compilation compared to the standard setuptools.build_ext. Falls back to the standard distutils backend if Ninja is not available. A usage sketch follows the note below.
Note
By default, the Ninja backend uses #CPUS + 2 workers to build the extension. This may use up too many resources on some systems. One can control the number of workers by setting the MAX_JOBS environment variable to a non-negative number.
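As a sketch, one way to disable Ninja for a particular build, assuming the BuildExtension.with_options helper is available in your PyTorch release:
>>> from setuptools import setup
>>> from torch.utils.cpp_extension import BuildExtension, CppExtension
>>> # Fall back to the distutils backend for this build only.
>>> setup(
        name='extension',
        ext_modules=[CppExtension(name='extension', sources=['extension.cpp'])],
        cmdclass={'build_ext': BuildExtension.with_options(use_ninja=False)})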
torch.utils.cpp_extension.load(name, sources: List[str], extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda: Optional[bool] = None, is_python_module=True, keep_intermediates=True)
Loads a PyTorch C++ extension just-in-time (JIT).
To load an extension, a Ninja build file is emitted, which is used to compile the given sources into a dynamic library. This library is subsequently loaded into the current Python process as a module and returned from this function, ready for use.
By default, the directory to which the build file is emitted and the resulting library compiled to is <tmp>/torch_extensions/<name>, where <tmp> is the temporary folder on the current platform and <name> the name of the extension. This location can be overridden in two ways. First, if the TORCH_EXTENSIONS_DIR environment variable is set, it replaces <tmp>/torch_extensions and all extensions will be compiled into subfolders of this directory. Second, if the build_directory argument to this function is supplied, it overrides the entire path, i.e. the library will be compiled into that folder directly.
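For illustration, both overrides might look as follows (the paths here are hypothetical examples):
>>> import os
>>> from torch.utils.cpp_extension import load
>>> # Option 1: redirect all JIT extension builds via the environment.
>>> os.environ['TORCH_EXTENSIONS_DIR'] = '/tmp/my_torch_extensions'
>>> # Option 2: pin this particular build to an explicit, existing folder.
>>> os.makedirs('/tmp/extension_build', exist_ok=True)
>>> module = load(name='extension',
                  sources=['extension.cpp'],
                  build_directory='/tmp/extension_build')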
To compile the sources, the default system compiler (c++) is used, which can be overridden by setting the CXX environment variable. To pass additional arguments to the compilation process, extra_cflags or extra_ldflags can be provided. For example, to compile your extension with optimizations, pass extra_cflags=['-O3']. You can also use extra_cflags to pass further include directories.
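A sketch of overriding the compiler and forwarding extra flags (clang++ and the include path are hypothetical choices):
>>> import os
>>> os.environ['CXX'] = 'clang++'  # consulted when the build runs
>>> from torch.utils.cpp_extension import load
>>> module = load(name='extension',
                  sources=['extension.cpp'],
                  extra_cflags=['-O3', '-I/opt/mylib/include'])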
CUDA support with mixed compilation is provided. Simply pass CUDA source files (.cu or .cuh) along with other sources. Such files will be detected and compiled with nvcc rather than the C++ compiler. This includes passing the CUDA lib64 directory as a library directory, and linking cudart. You can pass additional flags to nvcc via extra_cuda_cflags, just like with extra_cflags for C++. Various heuristics for finding the CUDA install directory are used, which usually work fine. If not, setting the CUDA_HOME environment variable is the safest option.
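If the heuristics do fail, a sketch of setting CUDA_HOME (the path is an example; in this release the variable is typically consulted when torch.utils.cpp_extension is first imported, so set it beforehand):
>>> import os
>>> os.environ['CUDA_HOME'] = '/usr/local/cuda'  # hypothetical toolkit root
>>> from torch.utils.cpp_extension import load
>>> module = load(name='cuda_extension',
                  sources=['extension.cpp', 'extension_kernel.cu'],
                  extra_cuda_cflags=['-O2'])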
Parameters
name – The name of the extension to build. This MUST be the same as the name of the pybind11 module!
sources – A list of relative or absolute paths to C++ source files.
extra_cflags – optional list of compiler flags to forward to the build.
extra_cuda_cflags – optional list of compiler flags to forward to nvcc when building CUDA sources.
extra_ldflags – optional list of linker flags to forward to the build.
extra_include_paths – optional list of include directories to forward to the build.
build_directory – optional path to use as build workspace.
verbose – If True, turns on verbose logging of load steps.
with_cuda – Determines whether CUDA headers and libraries are added to the build. If None (default), this value is automatically determined based on the existence of .cu or .cuh in sources. Set it to True to force CUDA headers and libraries to be included.
is_python_module – If True (default), imports the produced shared library as a Python module. If False, loads it into the process as a plain dynamic library.
Returns
If is_python_module is True, returns the loaded PyTorch extension as a Python module. If is_python_module is False, returns nothing (the shared library is loaded into the process as a side effect).
>>> from torch.utils.cpp_extension import load
>>> module = load(
        name='extension',
        sources=['extension.cpp', 'extension_kernel.cu'],
        extra_cflags=['-O2'],
        verbose=True)
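The returned module exposes whatever the sources bind; assuming extension.cpp binds a hypothetical tanh_add function, usage looks like:
>>> import torch
>>> x, y = torch.randn(4, 4), torch.randn(4, 4)
>>> result = module.tanh_add(x, y)  # tanh_add is hypothetical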
torch.utils.cpp_extension.load_inline(name, cpp_sources, cuda_sources=None, functions=None, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, with_pytorch_error_handling=True, keep_intermediates=True)
Loads a PyTorch C++ extension just-in-time (JIT) from string sources.
This function behaves exactly like load(), but takes its sources as strings rather than filenames. These strings are stored to files in the build directory, after which the behavior of load_inline() is identical to load().
See the tests for good examples of using this function.
Sources may omit two required parts of a typical non-inline C++ extension: the necessary header includes, as well as the (pybind11) binding code. More precisely, strings passed to cpp_sources are first concatenated into a single .cpp file. This file is then prepended with #include <torch/extension.h>.
Furthermore, if the functions argument is supplied, bindings will be automatically generated for each function specified. functions can either be a list of function names, or a dictionary mapping from function names to docstrings. If a list is given, the name of each function is used as its docstring.
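For example, the dictionary form attaches a docstring to each generated binding (a sketch; both function names are hypothetical):
>>> from torch.utils.cpp_extension import load_inline
>>> source = '''
at::Tensor sin_add(at::Tensor x, at::Tensor y) { return x.sin() + y.sin(); }
at::Tensor cos_add(at::Tensor x, at::Tensor y) { return x.cos() + y.cos(); }
'''
>>> module = load_inline(
        name='inline_docstrings',
        cpp_sources=[source],
        functions={'sin_add': 'Adds the sines of two tensors.',
                   'cos_add': 'Adds the cosines of two tensors.'})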
The sources in cuda_sources are concatenated into a separate .cu file and prepended with torch/types.h, cuda.h and cuda_runtime.h includes. The .cpp and .cu files are compiled separately, but ultimately linked into a single library. Note that no bindings are generated for functions in cuda_sources per se. To bind to a CUDA kernel, you must create a C++ function that calls it, and either declare or define this C++ function in one of the cpp_sources (and include its name in functions).
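A minimal sketch of that kernel-binding pattern (all names are hypothetical; the kernel assumes a contiguous float CUDA tensor and omits error checking):
>>> from torch.utils.cpp_extension import load_inline
>>> # C++ side: declare the CUDA-defined function so a binding is generated.
>>> cpp_source = 'at::Tensor square(at::Tensor x);'
>>> # CUDA side: a toy elementwise kernel plus the host wrapper defined here.
>>> cuda_source = '''
__global__ void square_kernel(const float* in, float* out, int64_t n) {
  int64_t i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) out[i] = in[i] * in[i];
}

at::Tensor square(at::Tensor x) {
  auto input = x.contiguous();
  auto out = at::empty_like(input);
  int64_t n = input.numel();
  int threads = 256;
  int blocks = static_cast<int>((n + threads - 1) / threads);
  square_kernel<<<blocks, threads>>>(
      input.data_ptr<float>(), out.data_ptr<float>(), n);
  return out;
}
'''
>>> module = load_inline(name='square_ext',
                         cpp_sources=[cpp_source],
                         cuda_sources=[cuda_source],
                         functions=['square'])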
See load() for a description of arguments omitted below.
Parameters
with_cuda – Determines whether CUDA headers and libraries are added to the build. If None (default), this value is automatically determined based on whether cuda_sources is provided. Set it to True to force CUDA headers and libraries to be included.
with_pytorch_error_handling – Determines whether pytorch error and warning macros are handled by pytorch instead of pybind. To do this, each function foo is called via an intermediary _safe_foo function. This redirection might cause issues in obscure cases of cpp. This flag should be set to False when this redirect causes issues.
>>> from torch.utils.cpp_extension import load_inline
>>> source = '''
at::Tensor sin_add(at::Tensor x, at::Tensor y) {
  return x.sin() + y.sin();
}
'''
>>> module = load_inline(name='inline_extension',
                         cpp_sources=[source],
                         functions=['sin_add'])
Note
By default, the Ninja backend uses #CPUS + 2 workers to build the extension. This may use up too many resources on some systems. One can control the number of workers by setting the MAX_JOBS environment variable to a non-negative number.
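For example, one way to cap the worker count from Python before triggering a JIT build (setting the variable in the shell works equally well):
>>> import os
>>> os.environ['MAX_JOBS'] = '4'  # cap Ninja at four parallel workers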
torch.utils.cpp_extension.include_paths(cuda: bool = False) → List[str]
Get the include paths required to build a C++ or CUDA extension.
Parameters
cuda – If True, includes CUDA-specific include paths.
Returns
A list of include path strings.
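A quick usage sketch:
>>> from torch.utils.cpp_extension import include_paths
>>> for path in include_paths(cuda=True):
...     print(path)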
torch.utils.cpp_extension.check_compiler_abi_compatibility(compiler) → bool
Verifies that the given compiler is ABI-compatible with PyTorch.
Parameters
compiler (str) – The compiler executable name to check (e.g. g++). Must be executable in a shell process.
Returns
False if the compiler is (likely) ABI-incompatible with PyTorch, else True.
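For instance (the result depends on the local toolchain, so no output is shown):
>>> from torch.utils.cpp_extension import check_compiler_abi_compatibility
>>> if not check_compiler_abi_compatibility('g++'):
...     print('g++ is likely ABI-incompatible with this PyTorch build')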
torch.utils.cpp_extension.verify_ninja_availability()
Raises RuntimeError if the ninja build system is not available on the system; does nothing otherwise.
torch.utils.cpp_extension.is_ninja_available()
Returns True if the ninja build system is available on the system, False otherwise.
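A sketch combining the two checks:
>>> from torch.utils.cpp_extension import is_ninja_available, verify_ninja_availability
>>> if is_ninja_available():
...     print('ninja is available')
>>> verify_ninja_availability()  # raises RuntimeError if ninja is missing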
© 2019 Torch Contributors
Licensed under the 3-clause BSD License.
https://pytorch.org/docs/1.7.0/cpp_extension.html