torch.compiler API reference
Created On: Jun 02, 2023 | Last Updated On: Jun 22, 2025
For a quick overview of torch.compiler, see the torch.compiler overview page.
compile
| See torch.compile() for details on the arguments for this function. |
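A minimal sketch of the decorator form; `backend="eager"` runs the captured graph with the normal eager interpreter, so the example needs no Inductor/Triton toolchain:

```python
import torch

# Compile a small function. backend="eager" executes the traced graph
# with regular eager kernels, keeping the example self-contained.
@torch.compile(backend="eager")
def scaled_sin(x):
    return torch.sin(x) * 2.0

out = scaled_sin(torch.ones(4))
```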
reset
| This function clears all compilation caches and restores the system to its initial state. |
allow_in_graph
| Tells the compiler frontend (Dynamo) to skip symbolic introspection of the function and instead directly write it to the graph when encountered. |
substitute_in_graph
| Register a polyfill handler for a function, usually a C function from a C extension, to be used in place of the original function when it is inlined into the graph. |
assume_constant_result
| This function is used to mark a function as having a constant result, allowing it to be evaluated once at compile time and its return value treated as a constant in the graph. |
list_backends
| Return valid strings that can be passed to the backend argument of torch.compile(). |
disable
| This function provides a decorator to disable compilation on a function. |
set_stance
| Set the current stance of the compiler. |
set_enable_guard_collectives
| Enables use of collectives during guard evaluation to synchronize behavior across ranks. |
cudagraph_mark_step_begin
| Indicates that a new iteration of inference or training is about to begin. |
is_compiling
| Indicates whether a graph is executed/traced as part of torch.compile() or torch.export(). |
is_dynamo_compiling
| Indicates whether a graph is traced via TorchDynamo. |
is_exporting
| Indicates whether we are currently exporting. |
skip_guard_on_inbuilt_nn_modules_unsafe
| A common function to skip guards on the inbuilt nn modules like torch.nn.Linear. |
skip_guard_on_all_nn_modules_unsafe
| A common function to skip guards on all nn modules, both user-defined and inbuilt (like torch.nn.Linear). |
keep_tensor_guards_unsafe
| A common function to keep tensor guards on all tensors. |
skip_guard_on_globals_unsafe
| A common function to skip guards on all globals. |
nested_compile_region
| Tells ``torch.compile`` that the marked set of operations forms a nested compile region (which is often repeated in the full model) whose code can be compiled once and safely reused. |