[Docs] Align documentation with tirx/s_tir namespace split (#18953)
Since we've split the old `tir` namespace into `tirx` (core IR /
lowering) and `s_tir` (schedule primitives / auto-tuning), some outdated
documentation needs to be updated. The global rename still leaves a few
concept-level references using "tirx" in prose (for example, "Relax and
tirx programs"). Because "tirx" now refers to only one part of the old
TensorIR stack, these higher-level references should use "TensorIR"
instead, so they correctly cover both `tirx` and `s_tir`.
In this PR, we:
- Add tirx / s_tir module descriptions to
`docs/deep_dive/tensor_ir/index.rst` and `docs/arch/index.rst` (new
`tvm/s_tir` section, updated `tvm/tirx` section).
- Fix concept-level prose in `docs/arch/index.rst` and
`docs/arch/pass_infra.rst` to use "TensorIR" instead of "tirx" where
referring to the concept rather than the namespace.
- Fix `docs/arch/runtimes/vulkan.rst` to use "TIR" instead of "tirx" in
debug shader description.
- Correct `tvm/dlight` → `tvm/s_tir/dlight` section path and "tirx
schedules" → "s_tir schedules" in `docs/arch/index.rst`.
- Revert unintended label changes in `abstraction.rst` and
`tir_creation.py` (labels kept as `_tir-abstraction`, `_tir-creation`).
- Revert unintended title change in `tir_transformation.py` (kept as
"Transformation").
- Revert `exclude-members` change in `tirx/tirx.rst` (kept original
list).
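The split described above implies a simple mapping for dotted references in docs and examples: schedule-related attributes move under `s_tir`, while core IR and lowering stay under `tirx`. The sketch below illustrates that mapping as a small reference-rewriting helper; the attribute-to-namespace table is an assumption inferred from this PR's description, not an official migration tool.

```python
import re

# Assumed mapping, inferred from the tirx / s_tir split described in this PR:
# schedule primitives and auto-tuning go to `s_tir`; everything else that
# lived under the old `tir` namespace goes to `tirx` (core IR / lowering).
# Illustrative only -- not part of TVM.
SCHEDULE_ATTRS = {"Schedule", "schedule", "tensor_intrin", "meta_schedule", "dlight"}

def migrate_reference(dotted: str) -> str:
    """Rewrite a dotted ``tvm.tir.X`` reference to the post-split namespace."""
    m = re.fullmatch(r"tvm\.tir\.(\w+)", dotted)
    if m is None:
        return dotted  # not an old-namespace reference; leave unchanged
    attr = m.group(1)
    ns = "s_tir" if attr in SCHEDULE_ATTRS else "tirx"
    return f"tvm.{ns}.{attr}"

print(migrate_reference("tvm.tir.Schedule"))  # tvm.s_tir.Schedule
print(migrate_reference("tvm.tir.PrimFunc"))  # tvm.tirx.PrimFunc
```

Running such a helper over prose and code snippets is one way to catch the stragglers this PR fixes by hand.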
File changed: docs/arch/index.rst (33 additions, 19 deletions)
@@ -83,14 +83,14 @@ relax transformations
 relax transformations contain a collection of passes that apply to relax functions. The optimizations include common graph-level
 optimizations such as constant folding and dead-code elimination for operators, and backend-specific optimizations such as library dispatch.
 
-tirx transformations
-^^^^^^^^^^^^^^^^^^^^
+TensorIR transformations
+^^^^^^^^^^^^^^^^^^^^^^^^
 
 - **TensorIR schedule**: TensorIR schedules are designed to optimize the TensorIR functions for a specific target, with user-guided instructions and control how the target code is generated.
-  For CPU targets, tirx PrimFunc can generate valid code and execute on the target device without schedule but with very-low performance. However, for GPU targets, the schedule is essential
+  For CPU targets, a TensorIR PrimFunc can generate valid code and execute on the target device without schedule but with very-low performance. However, for GPU targets, the schedule is essential
   for generating valid code with thread bindings. For more details, please refer to the :ref:`TensorIR Transformation <tirx-transform>` section. Additionally, we provides ``MetaSchedule`` to
   automate the search of TensorIR schedule.
-- **Lowering Passes**: These passes usually perform after the schedule is applied, transforming a tirx PrimFunc into another functionally equivalent PrimFunc, but closer to the
+- **Lowering Passes**: These passes usually perform after the schedule is applied, transforming a TensorIR PrimFunc into another functionally equivalent PrimFunc, but closer to the
   target-specific representation. For example, there are passes to flatten multi-dimensional access to one-dimensional pointer access, to expand the intrinsics into target-specific ones,
   and to decorate the function entry to meet the runtime calling convention.
@@ -101,12 +101,12 @@ focus on optimizations that are not covered by them.
 
 cross-level transformations
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Apache TVM enables cross-level optimization of end-to-end models. As the IRModule includes both relax and tirx functions, the cross-level transformations are designed to mutate
+Apache TVM enables cross-level optimization of end-to-end models. As the IRModule includes both Relax and TensorIR functions, the cross-level transformations are designed to mutate
 the IRModule by applying different transformations to these two types of functions.
 
-For example, ``relax.LegalizeOps`` pass mutates the IRModule by lowering relax operators, adding corresponding tirx PrimFunc into the IRModule, and replacing the relax operators
-with calls to the lowered tirx PrimFunc. Another example is operator fusion pipeline in relax (including ``relax.FuseOps`` and ``relax.FuseTIR``), which fuses multiple consecutive tensor operations
-into one. Different from the previous implementations, relax fusion pipeline analyzes the pattern of tirx functions and detects the best fusion rules automatically rather
+For example, ``relax.LegalizeOps`` pass mutates the IRModule by lowering relax operators, adding corresponding TensorIR PrimFunc into the IRModule, and replacing the relax operators
+with calls to the lowered TensorIR PrimFunc. Another example is operator fusion pipeline in relax (including ``relax.FuseOps`` and ``relax.FuseTIR``), which fuses multiple consecutive tensor operations
+into one. Different from the previous implementations, relax fusion pipeline analyzes the pattern of TensorIR functions and detects the best fusion rules automatically rather
 than human-defined operator fusion patterns.
 
 Target Translation
@@ -306,22 +306,36 @@ in the IRModule. Please refer to the :ref:`Relax Deep Dive <relax-deep-dive>` fo
 tvm/tirx
 --------
 
-tirx contains the definition of the low-level program representations. We use ``tirx::PrimFunc`` to represent functions that can be transformed by tirx passes.
-Besides the IR data structures, the tirx module also includes:
+``tirx`` contains the core IR definitions and lowering infrastructure
+for TensorIR (split from the former ``tir`` module). ``tirx::PrimFunc``
+represents low-level tensor functions that can be transformed by tirx passes.
 
-- A set of analysis passes to analyze the tirx functions in ``tirx/analysis``.
-- A set of transformation passes to lower or optimize the tirx functions in ``tirx/transform``.
+The tirx module includes:
 
-The schedule primitives and tensor intrinsics are in ``s_tir/schedule`` and ``s_tir/tensor_intrin`` respectively.
+- IR data structures (PrimFunc, Buffer, SBlock, expressions, statements).
+- Analysis passes in ``tirx/analysis``.
+- Transformation and lowering passes in ``tirx/transform``.
+
+tvm/s_tir
+---------
+
+``s_tir`` (Schedulable TIR, split from the former ``tir`` module) contains
+schedule primitives and auto-tuning tools that operate on ``tirx::PrimFunc``:
+
+- Schedule primitives to control code generation (tiling, vectorization, thread
+  binding) in ``s_tir/schedule``.
+- Builtin tensor intrinsics in ``s_tir/tensor_intrin``.
+- MetaSchedule for automated performance tuning.
+- DLight for pre-defined, high-performance schedules.
 
 Please refer to the :ref:`TensorIR Deep Dive <tensor-ir-deep-dive>` for more details.
 
 tvm/arith
 ---------
 
-This module is closely tied to tirx. One of the key problems in the low-level code generation is the analysis of the indices'
+This module is closely tied to TensorIR. One of the key problems in the low-level code generation is the analysis of the indices'
 arithmetic properties — the positiveness, variable bound, and the integer set that describes the iterator space. arith module provides
-a collection of tools that do (primarily integer) analysis. A tirx pass can use these analyses to simplify and optimize the code.
+a collection of tools that do (primarily integer) analysis. A TensorIR pass can use these analyses to simplify and optimize the code.
 
 tvm/te and tvm/topi
 -------------------
@@ -330,7 +344,7 @@ TE stands for Tensor Expression. TE is a domain-specific language (DSL) for desc
 itself is not a self-contained function that can be stored into IRModule. We can use ``te.create_prim_func`` to convert a tensor expression to a ``tirx::PrimFunc``
 and then integrate it into the IRModule.
 
-While possible to construct operators directly via tirx or tensor expressions (TE) for each use case, it is tedious to do so.
+While possible to construct operators directly via TensorIR or tensor expressions (TE) for each use case, it is tedious to do so.
 `topi` (Tensor operator inventory) provides a set of pre-defined operators defined by numpy and found in common deep learning workloads.
 
 tvm/s_tir/meta_schedule
@@ -339,10 +353,10 @@ tvm/s_tir/meta_schedule
 MetaSchedule is a system for automated search-based program optimization,
 and can be used to optimize TensorIR schedules. Note that MetaSchedule only works with static-shape workloads.
 
-tvm/dlight
-----------
+tvm/s_tir/dlight
+----------------
 
-DLight is a set of pre-defined, easy-to-use, and performant tirx schedules. DLight aims:
+DLight is a set of pre-defined, easy-to-use, and performant s_tir schedules. DLight aims: