
Commit d293b7a (1 parent: c79caf0)

[Docs] Align documentation with tirx/s_tir namespace split (#18953)

Since we've split the old `tir` namespace into `tirx` (core IR / lowering) and `s_tir` (schedule primitives / auto-tuning), some outdated documentation needs to be updated. The global rename still leaves a few concept-level references using "tirx" in prose (for example, "Relax and tirx programs"). Since "tirx" now refers only to one part of the old TensorIR stack, these higher-level references should use "TensorIR" instead, so they correctly cover both `tirx` and `s_tir`.

In this PR, we:

- Add tirx / s_tir module descriptions to `docs/deep_dive/tensor_ir/index.rst` and `docs/arch/index.rst` (new `tvm/s_tir` section, updated `tvm/tirx` section).
- Fix concept-level prose in `docs/arch/index.rst` and `docs/arch/pass_infra.rst` to use "TensorIR" instead of "tirx" where it refers to the concept rather than the namespace.
- Fix `docs/arch/runtimes/vulkan.rst` to use "TIR" instead of "tirx" in the debug shader description.
- Correct the `tvm/dlight` → `tvm/s_tir/dlight` section path and "tirx schedules" → "s_tir schedules" in `docs/arch/index.rst`.
- Revert unintended label changes in `abstraction.rst` and `tir_creation.py` (labels kept as `_tir-abstraction`, `_tir-creation`).
- Revert unintended title change in `tir_transformation.py` (kept as "Transformation").
- Revert the `exclude-members` change in `tirx/tirx.rst` (kept the original list).

4 files changed

Lines changed: 52 additions & 28 deletions


docs/arch/index.rst

Lines changed: 33 additions & 19 deletions
@@ -83,14 +83,14 @@ relax transformations
 relax transformations contain a collection of passes that apply to relax functions. The optimizations include common graph-level
 optimizations such as constant folding and dead-code elimination for operators, and backend-specific optimizations such as library dispatch.
 
-tirx transformations
-^^^^^^^^^^^^^^^^^^^^
+TensorIR transformations
+^^^^^^^^^^^^^^^^^^^^^^^^
 
 - **TensorIR schedule**: TensorIR schedules are designed to optimize the TensorIR functions for a specific target, with user-guided instructions and control how the target code is generated.
-  For CPU targets, tirx PrimFunc can generate valid code and execute on the target device without schedule but with very-low performance. However, for GPU targets, the schedule is essential
+  For CPU targets, a TensorIR PrimFunc can generate valid code and execute on the target device without schedule but with very-low performance. However, for GPU targets, the schedule is essential
   for generating valid code with thread bindings. For more details, please refer to the :ref:`TensorIR Transformation <tirx-transform>` section. Additionally, we provides ``MetaSchedule`` to
   automate the search of TensorIR schedule.
-- **Lowering Passes**: These passes usually perform after the schedule is applied, transforming a tirx PrimFunc into another functionally equivalent PrimFunc, but closer to the
+- **Lowering Passes**: These passes usually perform after the schedule is applied, transforming a TensorIR PrimFunc into another functionally equivalent PrimFunc, but closer to the
   target-specific representation. For example, there are passes to flatten multi-dimensional access to one-dimensional pointer access, to expand the intrinsics into target-specific ones,
   and to decorate the function entry to meet the runtime calling convention.

@@ -101,12 +101,12 @@ focus on optimizations that are not covered by them.
 
 cross-level transformations
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Apache TVM enables cross-level optimization of end-to-end models. As the IRModule includes both relax and tirx functions, the cross-level transformations are designed to mutate
+Apache TVM enables cross-level optimization of end-to-end models. As the IRModule includes both Relax and TensorIR functions, the cross-level transformations are designed to mutate
 the IRModule by applying different transformations to these two types of functions.
 
-For example, ``relax.LegalizeOps`` pass mutates the IRModule by lowering relax operators, adding corresponding tirx PrimFunc into the IRModule, and replacing the relax operators
-with calls to the lowered tirx PrimFunc. Another example is operator fusion pipeline in relax (including ``relax.FuseOps`` and ``relax.FuseTIR``), which fuses multiple consecutive tensor operations
-into one. Different from the previous implementations, relax fusion pipeline analyzes the pattern of tirx functions and detects the best fusion rules automatically rather
+For example, ``relax.LegalizeOps`` pass mutates the IRModule by lowering relax operators, adding corresponding TensorIR PrimFunc into the IRModule, and replacing the relax operators
+with calls to the lowered TensorIR PrimFunc. Another example is operator fusion pipeline in relax (including ``relax.FuseOps`` and ``relax.FuseTIR``), which fuses multiple consecutive tensor operations
+into one. Different from the previous implementations, relax fusion pipeline analyzes the pattern of TensorIR functions and detects the best fusion rules automatically rather
 than human-defined operator fusion patterns.
 
 Target Translation

@@ -306,22 +306,36 @@ in the IRModule. Please refer to the :ref:`Relax Deep Dive <relax-deep-dive>` fo
 tvm/tirx
 --------
 
-tirx contains the definition of the low-level program representations. We use ``tirx::PrimFunc`` to represent functions that can be transformed by tirx passes.
-Besides the IR data structures, the tirx module also includes:
+``tirx`` contains the core IR definitions and lowering infrastructure
+for TensorIR (split from the former ``tir`` module). ``tirx::PrimFunc``
+represents low-level tensor functions that can be transformed by tirx passes.
 
-- A set of analysis passes to analyze the tirx functions in ``tirx/analysis``.
-- A set of transformation passes to lower or optimize the tirx functions in ``tirx/transform``.
+The tirx module includes:
 
-The schedule primitives and tensor intrinsics are in ``s_tir/schedule`` and ``s_tir/tensor_intrin`` respectively.
+- IR data structures (PrimFunc, Buffer, SBlock, expressions, statements).
+- Analysis passes in ``tirx/analysis``.
+- Transformation and lowering passes in ``tirx/transform``.
+
+tvm/s_tir
+---------
+
+``s_tir`` (Schedulable TIR, split from the former ``tir`` module) contains
+schedule primitives and auto-tuning tools that operate on ``tirx::PrimFunc``:
+
+- Schedule primitives to control code generation (tiling, vectorization, thread
+  binding) in ``s_tir/schedule``.
+- Builtin tensor intrinsics in ``s_tir/tensor_intrin``.
+- MetaSchedule for automated performance tuning.
+- DLight for pre-defined, high-performance schedules.
 
 Please refer to the :ref:`TensorIR Deep Dive <tensor-ir-deep-dive>` for more details.
 
 tvm/arith
 ---------
 
-This module is closely tied to tirx. One of the key problems in the low-level code generation is the analysis of the indices'
+This module is closely tied to TensorIR. One of the key problems in the low-level code generation is the analysis of the indices'
 arithmetic properties — the positiveness, variable bound, and the integer set that describes the iterator space. arith module provides
-a collection of tools that do (primarily integer) analysis. A tirx pass can use these analyses to simplify and optimize the code.
+a collection of tools that do (primarily integer) analysis. A TensorIR pass can use these analyses to simplify and optimize the code.
 
 tvm/te and tvm/topi
 -------------------

@@ -330,7 +344,7 @@ TE stands for Tensor Expression. TE is a domain-specific language (DSL) for desc
 itself is not a self-contained function that can be stored into IRModule. We can use ``te.create_prim_func`` to convert a tensor expression to a ``tirx::PrimFunc``
 and then integrate it into the IRModule.
 
-While possible to construct operators directly via tirx or tensor expressions (TE) for each use case, it is tedious to do so.
+While possible to construct operators directly via TensorIR or tensor expressions (TE) for each use case, it is tedious to do so.
 `topi` (Tensor operator inventory) provides a set of pre-defined operators defined by numpy and found in common deep learning workloads.
 
 tvm/s_tir/meta_schedule

@@ -339,10 +353,10 @@ tvm/s_tir/meta_schedule
 MetaSchedule is a system for automated search-based program optimization,
 and can be used to optimize TensorIR schedules. Note that MetaSchedule only works with static-shape workloads.
 
-tvm/dlight
-----------
+tvm/s_tir/dlight
+----------------
 
-DLight is a set of pre-defined, easy-to-use, and performant tirx schedules. DLight aims:
+DLight is a set of pre-defined, easy-to-use, and performant s_tir schedules. DLight aims:
 
 - Fully support **dynamic shape workloads**.
 - **Light weight**. DLight schedules provides tuning-free schedule with reasonable performance.
docs/arch/pass_infra.rst

Lines changed: 6 additions & 6 deletions
@@ -31,7 +31,7 @@ transformation using the analysis result collected during and/or before traversa
 However, as TVM evolves quickly, the need for a more systematic and efficient
 way to manage these passes is becoming apparent. In addition, a generic
 framework that manages the passes across different layers of the TVM stack (e.g.
-Relax and tirx) paves the way for developers to quickly prototype and plug the
+Relax and TensorIR) paves the way for developers to quickly prototype and plug the
 implemented passes into the system.
 
 This doc describes the design of such an infra that takes the advantage of the

@@ -166,7 +166,7 @@ Pass Constructs
 ^^^^^^^^^^^^^^^
 
 The pass infra is designed in a hierarchical manner, and it could work at
-different granularities of Relax/tirx programs. A pure virtual class ``PassNode`` is
+different granularities of Relax/TensorIR programs. A pure virtual class ``PassNode`` is
 introduced to serve as the base of the different optimization passes. This class
 contains several virtual methods that must be implemented by the
 subclasses at the level of modules, functions, or sequences of passes.

@@ -222,13 +222,13 @@ Function-Level Passes
 ^^^^^^^^^^^^^^^^^^^^^
 
 Function-level passes are used to implement various intra-function level
-optimizations for a given Relax/tirx module. It fetches one function at a time from
+optimizations for a given Relax/TensorIR module. It fetches one function at a time from
 the function list of a module for optimization and yields a rewritten Relax
-``Function`` or tirx ``PrimFunc``. Most of passes can be classified into this category, such as
+``Function`` or TensorIR ``PrimFunc``. Most of passes can be classified into this category, such as
 common subexpression elimination and inference simplification in Relax as well as vectorization
-and flattening storage in tirx, etc.
+and flattening storage in TensorIR, etc.
 
-Note that the scope of passes at this level is either a Relax function or a tirx primitive function.
+Note that the scope of passes at this level is either a Relax function or a TensorIR primitive function.
 Therefore, we cannot add or delete a function through these passes as they are not aware of
 the global information.
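The function-level-pass hunk describes a pass that fetches one function at a time and can rewrite but never add or delete functions. A minimal sketch of that contract (plain Python with a toy instruction list, not TVM's ``PassNode`` API) could look like:

```python
# Minimal sketch of a function-level pass: a per-function rewrite is lifted
# into a whole-module pass that processes one function at a time and keeps
# the set of functions unchanged.

def function_pass(transform):
    """Lift a per-function rewrite into a whole-module pass."""
    def run(module):
        # Same set of function names in, same set out: the pass can only
        # rewrite function bodies, never add or delete functions.
        return {name: transform(func) for name, func in module.items()}
    return run

# Toy "function": a list of instruction dicts; the rewrite drops
# instructions already marked dead (a stand-in for real dead-code analysis).
def eliminate_dead_code(func):
    return [inst for inst in func if not inst.get("dead", False)]

module = {
    "main": [{"op": "add"}, {"op": "mul", "dead": True}],
    "helper": [{"op": "sub"}],
}
optimized = function_pass(eliminate_dead_code)(module)
print(sorted(optimized))       # ['helper', 'main']: no functions added or removed
print(len(optimized["main"]))  # 1
```

This mirrors the constraint stated in the diff: because each invocation sees only one function, global rewrites such as adding or removing functions need a module-level pass instead.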

docs/arch/runtimes/vulkan.rst

Lines changed: 1 addition & 1 deletion
@@ -254,6 +254,6 @@ string are all false boolean flags.
   validated with `spvValidate`_.
 
 * ``TVM_VULKAN_DEBUG_SHADER_SAVEPATH`` - A path to a directory. If
-  set to a non-empty string, the Vulkan codegen will save tirx, binary
+  set to a non-empty string, the Vulkan codegen will save TIR, binary
   SPIR-V, and disassembled SPIR-V shaders to this directory, to be
   used for debugging purposes.
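The convention in this hunk (unset or empty string disables the debug dump, any non-empty string is the output directory) can be sketched in plain Python; this is an illustration of the documented behavior, not TVM's actual parsing code:

```python
# Illustrative sketch of the environment-variable convention described
# above: an unset or empty TVM_VULKAN_DEBUG_SHADER_SAVEPATH disables the
# debug shader dump; any non-empty string is treated as the save directory.
import os

def debug_shader_savepath(env=os.environ):
    path = env.get("TVM_VULKAN_DEBUG_SHADER_SAVEPATH", "")
    return path if path else None  # None means "do not save debug shaders"

print(debug_shader_savepath({}))                                        # None
print(debug_shader_savepath({"TVM_VULKAN_DEBUG_SHADER_SAVEPATH": ""}))  # None
print(debug_shader_savepath({"TVM_VULKAN_DEBUG_SHADER_SAVEPATH": "/tmp/shaders"}))
```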

docs/deep_dive/tensor_ir/index.rst

Lines changed: 12 additions & 2 deletions
@@ -19,8 +19,18 @@
 
 TensorIR
 ========
-TensorIR is one of the core abstraction in Apache TVM stack, which is used to
-represent and optimize the primitive tensor functions.
+TensorIR is one of the core abstractions in the Apache TVM stack, used to
+represent and optimize primitive tensor functions.
+
+The TensorIR codebase consists of two modules (split from the former ``tir``):
+
+- **tirx** — Core IR definitions and lowering (PrimFunc, Buffer, SBlock,
+  expressions, statements, lowering passes).
+- **s_tir** (Schedulable TIR) — Schedule primitives, MetaSchedule, DLight,
+  and tensor intrinsics.
+
+In TVMScript, both modules are accessed via
+``from tvm.script import tirx as T``.
 
 .. toctree::
    :maxdepth: 2
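The ``s_tir`` schedule primitives described in this commit (tiling, vectorization, thread binding) all rewrite loop nests without changing what the program computes. As a conceptual sketch in plain Python (not the ``s_tir`` API), a loop split preserves the iteration space exactly:

```python
# Conceptual sketch of a schedule primitive: splitting a loop of extent n
# into an outer/inner pair (a 4 x 4 tile for n = 16) changes the loop
# structure but visits exactly the same iteration space in the same order.

def original_loop(n=16):
    return [i for i in range(n)]

def tiled_loop(n=16, factor=4):
    assert n % factor == 0  # sketch assumes the extent divides evenly
    out = []
    for outer in range(n // factor):      # loop split: i -> (outer, inner)
        for inner in range(factor):
            out.append(outer * factor + inner)
    return out

print(tiled_loop() == original_loop())  # True: the transform preserves semantics
```

This semantics-preserving property is what lets schedules (hand-written, or found by MetaSchedule / DLight) be applied aggressively: only performance changes, never the result.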
