Commit 04c456f

change PyTorch version in README
Signed-off-by: Sung Hyun Cho <hope5487@gmail.com>
1 parent bb245b2 commit 04c456f

2 files changed: 2 additions & 2 deletions

README.md

Lines changed: 1 addition & 1 deletion

@@ -114,7 +114,7 @@ dynamically link them at runtime.
 
 ## Requirements
 * [PyTorch](https://pytorch.org/) must be installed _before_ installing DeepSpeed.
-* For full feature support we recommend a version of PyTorch that is >= 1.9 and ideally the latest PyTorch stable release.
+* For full feature support we recommend a version of PyTorch that is >= 2.0 and ideally the latest PyTorch stable release.
 * A CUDA or ROCm compiler such as [nvcc](https://docs.nvidia.com/cuda/cuda-compiler-driver-nvcc/#introduction) or [hipcc](https://github.com/ROCm-Developer-Tools/HIPCC) used to compile C++/CUDA/HIP extensions.
 * Specific GPUs we develop and test against are listed below, this doesn't mean your GPU will not work if it doesn't fall into this category it's just DeepSpeed is most well tested on the following:
 * NVIDIA: Pascal, Volta, Ampere, and Hopper architectures
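The requirement this commit bumps is a minimum PyTorch version (>= 2.0). As a hedged sketch of how such a gate could be checked, and not a DeepSpeed utility, here is a small comparator for dotted version strings of the kind `torch.__version__` returns (the function name `meets_minimum` is hypothetical):

```python
def meets_minimum(version: str, minimum: str = "2.0") -> bool:
    """Return True if a dotted version string is >= the minimum.

    Handles local build suffixes such as '2.1.0+cu118' by dropping
    everything after the first '+', then compares numeric components
    element-wise (so '2.1.0' >= '2.0' and '1.9.0' < '2.0').
    """
    def parts(v: str) -> list[int]:
        core = v.split("+")[0]  # strip local build tag, e.g. '+cu118'
        return [int(p) for p in core.split(".") if p.isdigit()]

    return parts(version) >= parts(minimum)
```

In practice one would feed this `torch.__version__`; real installers typically use `packaging.version.Version` instead of hand-rolled parsing, which also handles pre-release tags correctly.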

tests/unit/v1/zero/test_zero_functorch_linear.py

Lines changed: 1 addition & 1 deletion

@@ -7,7 +7,7 @@
 ZeRO Stage 3 uses ``LinearFunctionForZeroStage3`` (via ``zero3_linear_wrap``) as
 the memory-efficient linear path. After ``deepspeed.initialize``, global
 ``torch.nn.functional.linear`` is often the built-in again, so tests call
-``zero3_linear_wrap`` directly—the same ``autograd.Function`` as when the patch
+``zero3_linear_wrap`` directly-the same ``autograd.Function`` as when the patch
 is active. Legacy ``forward(ctx, ...)`` + ``ctx.save_for_backward`` in forward
 raises on strict functorch builds::
 
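The docstring above notes that the legacy `forward(ctx, ...)` + `ctx.save_for_backward`-inside-forward pattern raises on strict functorch builds. As an illustration only (this is not `LinearFunctionForZeroStage3`), a minimal `autograd.Function` written in the functorch-compatible style, assuming PyTorch >= 2.0's `setup_context` API, splits tensor saving out of `forward`:

```python
import torch


class LinearFn(torch.autograd.Function):
    """Sketch of a functorch-friendly linear op: y = x @ w.T.

    Modern style: ``forward`` takes no ``ctx``; tensor saving happens
    in ``setup_context``, which is what strict functorch builds expect.
    """

    @staticmethod
    def forward(input, weight):
        # No ctx argument and no save_for_backward here.
        return input @ weight.t()

    @staticmethod
    def setup_context(ctx, inputs, output):
        input, weight = inputs
        ctx.save_for_backward(input, weight)

    @staticmethod
    def backward(ctx, grad_output):
        input, weight = ctx.saved_tensors
        # d(loss)/d(input) = grad_output @ weight; d(loss)/d(weight) = grad_output.T @ input
        return grad_output @ weight, grad_output.t() @ input


x = torch.randn(4, 3, requires_grad=True)
w = torch.randn(5, 3, requires_grad=True)
y = LinearFn.apply(x, w)  # shape (4, 5)
y.sum().backward()
```

The design point the test docstring is making is precisely this split: with `save_for_backward` inside `forward`, function-transform machinery cannot separate the pure computation from the autograd bookkeeping, so strict builds reject it.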
