Commit 561f1df (deploy: 8531f63, 0 parents)

47 files changed

Lines changed: 6864 additions & 0 deletions

.buildinfo

Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file records the configuration used when building these files. When it is not found, a full rebuild will be done.
config: 4df352b8564d758084f9689c2e625323
tags: 645f666f9bcd5a90fca523b33c5a78b7

.nojekyll

Whitespace-only changes.

_sources/index.rst.txt

Lines changed: 37 additions & 0 deletions
@@ -0,0 +1,37 @@
.. rf_diffusion documentation master file, created by
   sphinx-quickstart on Sat May 25 18:21:23 2024.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to the Official Documentation for `RFdiffusion2 <https://github.com/RosettaCommons/RFdiffusion2>`_!
===========================================================================================================
.. mdinclude:: overview.md

.. toctree::
   :maxdepth: 1
   :caption: Contents:

   Overview <self>
   readme_link.rst
   license_link.rst
   installation.md
   usage/usage.md
   usage/configuration_options.md
   usage/ori_tokens.md

.. toctree::
   :maxdepth: 1
   :hidden:

.. usage/other_pipeline_example.md
.. usage/run_inference_example.md

.. Indices and tables
.. ==================
..
.. * :ref:`genindex`
.. * :ref:`modindex`
.. * :ref:`search`
..
.. .. include:: new.rst
.. .. include:: modules.rst

_sources/installation.md.txt

Lines changed: 214 additions & 0 deletions
@@ -0,0 +1,214 @@
# Installation Guide

## Apptainer Image (Recommended)
There is an Apptainer image provided in the RFdiffusion2 repository, located at `RFdiffusion2/rf_diffusion/exec/bakerlab_rf_diffusion_aa.sif`. This file can be run with either Apptainer or Singularity; if you have any issues using it, please [create an issue](https://github.com/RosettaCommons/RFdiffusion2/issues). An example of how to use this image is given in the [README](readme_link.html#inference).

If you need to generate your own image, the `.spec` file used to generate the given `.sif` file can be found at `RFdiffusion2/rf_diffusion/exec/rf_diffusion_aa.spec`.

### Troubleshooting
<a id="image_troubleshooting"></a>

<details>
<summary>lz4 compression issues</summary>

Full error message you might see:
```
FATAL: container creation failed: mount hook function failure: mount /proc/self/fd/3->/var/apptainer/mnt/session/rootfs error: while mounting image /proc/self/fd/3: squashfuse_ll exited with status 255: Squashfs image uses lz4 compression, this version supports only zlib.
```
Or you may see:
```
FATAL: kernel reported a bad superblock for squashfs image partition,possible causes are that your kernel doesn't support the compression algorithm or the image is corrupted.
```

To fix this issue, you can rebuild the `.sif` on your HPC cluster:
```
apptainer build --sandbox rfd2_sandbox /path/to/bakerlab_rf_diffusion_aa.sif
apptainer build rfd2_zlib.sif rfd2_sandbox
```
Thank you to those who posted in [Issue 10](https://github.com/RosettaCommons/RFdiffusion2/issues/10) for reporting this problem and documenting a solution.
</details>

## Creating Your Own Environment
You do not need to install RFdiffusion2 itself, but you do need to install several dependencies in order to use the Python scripts that run the inference calculations.
This is exactly what the Apptainer image above supplies: an environment in which the dependencies required by RFdiffusion2 have already been installed.
If this container works on your computing system, we highly recommend using it.

However, if you need to set up your own environment, the instructions below should help you determine the dependency versions you need to get RFdiffusion2 running on your system.
### Using Provided Environment Files
We have created a few environment files to automatically generate a conda environment that will allow RFdiffusion2 to run.
> Note: Due to variations in GPU types and drivers, we are not able to guarantee that any of the provided environment files successfully install all the required dependencies. See the section below if none of the provided environment files are appropriate for your computing system.

You can find the prepared environment files in the `envs` directory:
- `cuda121_env.yml` - This is appropriate for systems able to run CUDA 12.1 and PyTorch 2.4.0
  - This uses `requirements_cuda121.txt` to install dependencies via `pip`
- `cuda124_env.yml` - This is appropriate for systems able to run CUDA 12.4 and PyTorch 2.4.0
  - This uses `requirements_cuda124.txt` to install dependencies via `pip`

If you have trouble with these files even though they *should* work based on your system specifications, here are a few things to try:
1. Separate the creation of the environment from the installation of dependencies via pip:
   1. Remove the last two lines from the above `.yml` files
   2. Run:
      ```
      conda env create -f cuda121_env.yml
      conda activate rfd2_env
      pip install -r requirements_cuda121.txt
      ```
   This will force the dependencies you want installed by conda to be installed before pip is used.
2. Check that the Python being referenced is the one from your conda environment once it is activated. On clusters, modules you have loaded might overrule the Python in your conda environment. You can either give the path to your environment's Python explicitly, or change your system settings or environment variables to prefer the environment's Python installation.
3. Try installing any dependencies that pip hangs on with conda instead of pip.

If you have created an environment file that runs RFdiffusion2 for a different CUDA version or other dependency versions, create a PR to add it to the `envs` directory.
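Point 2 above can be scripted. A minimal sketch, assuming the environment name `rfd2_env` from the provided `.yml` files and that `conda activate` has exported `CONDA_PREFIX` (as it normally does):

```python
import os
import sys


def using_conda_env(env_name: str) -> bool:
    """Return True if the running interpreter belongs to the named conda env."""
    # `conda activate` exports CONDA_PREFIX pointing at the environment root.
    prefix = os.environ.get("CONDA_PREFIX", "")
    # If sys.prefix differs from CONDA_PREFIX, a cluster module's python is
    # shadowing the environment's python.
    return os.path.basename(prefix) == env_name and sys.prefix == prefix
```

If this returns `False` inside an activated environment, compare `which python` against your loaded modules.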

### Creating the Environment Manually
Some of the dependencies listed below will vary based on your system, especially the version of CUDA available on your cluster.
You will likely need to change some of the versions of the tools below to successfully install RFdiffusion2.
The instructions below are for CUDA 12.4 and PyTorch 2.4.
For some useful troubleshooting tips, see the [Troubleshooting](#install_troubleshooting) section below.

1. Create a conda environment using [miniforge](https://github.com/conda-forge/miniforge) and activate it
1. Point to the correct [NVIDIA-CUDA channel](https://anaconda.org/nvidia/cuda/labels), and install [PyTorch](https://pytorch.org/), Python 3.11, and [pip](https://pip.pypa.io/en/latest/) based on what is available on your system:
   ```
   conda install --yes \
       -c nvidia/label/cuda-12.4.0 \
       -c https://conda.rosettacommons.org \
       -c pytorch \
       -c dglteam/label/th24_cu124 \
       python==3.11 \
       pip \
       numpy"<2" \
       matplotlib \
       jupyterlab \
       conda-forge::openbabel==3.1.1 \
       cuda \
       pytorch==2.4 \
       pytorch-cuda==12.4 \
       pyrosetta
   ```
   > **REMEMBER:** You will need to change your CUDA version based on what is available on your system. This needs to be changed in the
   > NVIDIA channel, the dglteam channel, the pytorch version, and the pytorch-cuda version.
1. Use pip to install several Python libraries:
   ```
   pip install \
       hydra-core==1.3.1 \
       ml-collections==0.1.1 \
       addict==2.4.0 \
       assertpy==1.1.0 \
       biopython==1.83 \
       colorlog \
       compact-json \
       cython==3.0.0 \
       cytoolz==0.12.3 \
       debugpy==1.8.5 \
       deepdiff==6.3.0 \
       dm-tree==0.1.8 \
       e3nn==0.5.1 \
       einops==0.7.0 \
       executing==2.0.0 \
       fastparquet==2024.5.0 \
       fire==0.6.0 \
       GPUtil==1.4.0 \
       icecream==2.1.3 \
       ipdb==0.13.11 \
       ipykernel==6.29.5 \
       ipython==8.27.0 \
       ipywidgets \
       mdtraj==1.10.0 \
       numba \
       omegaconf==2.3.0 \
       opt_einsum==3.3.0 \
       pandas==1.5.0 \
       plotly==5.16.1 \
       pre-commit==3.7.1 \
       py3Dmol==2.2.1 \
       pyarrow==17.0.0 \
       pydantic \
       pyrsistent==0.19.3 \
       pytest-benchmark \
       pytest-cov==4.1.0 \
       pytest-dotenv==0.5.2 \
       pytest==8.2.0 \
       rdkit==2024.3.5 \
       RestrictedPython \
       ruff==0.6.2 \
       scipy==1.13.1 \
       seaborn==0.13.2 \
       submitit \
       sympy==1.13.2 \
       tmtools \
       tqdm==4.65.0 \
       typer==0.12.5 \
       wandb==0.13.10
   ```
1. Install [Biotite](https://www.biotite-python.org/latest/index.html), several libraries related to PyTorch, and [pylibcugraphops](https://pypi.org/project/pylibcugraphops-cu12/):
   ```
   pip install biotite
   pip install pyg_lib torch_scatter torch_sparse torch_cluster torch_spline_conv -f https://data.pyg.org/whl/torch-2.4.0+cu124.html
   pip install -U -i https://pypi.anaconda.org/rapidsai-wheels-nightly/simple "pylibcugraphops-cu12>=24.6.0a24"
   ```
   > **REMEMBER:** You will need to change the link for installing the PyTorch-related libraries (the second line in the code block above) to match your PyTorch and CUDA versions.
1. Install a version of [TorchData](https://pypi.org/project/torchdata/#what-is-torchdata) that still has DataPipes:
   ```
   pip install torchdata==0.9.0
   ```
1. Install a version of the [Deep Graph Library](https://www.dgl.ai/pages/start.html) based on the version of PyTorch and CUDA you are using:
   ```
   conda install -c dglteam/label/th24_cu124 dgl
   ```
   > **REMEMBER:** You will need to change the conda channel to match your versions of PyTorch (`th24` in the line above) and CUDA (`cu124` in the line above). Use the [Deep Graph Library's Installation guide](https://www.dgl.ai/pages/start.html) to determine the correct conda or pip command.
1. Set your `PYTHONPATH` environment variable:
   ```
   export PYTHONPATH=$PYTHONPATH:/path/to/RFdiffusion2
   ```

   You can add this to your environment via
   ```
   conda env config vars set PYTHONPATH=$PYTHONPATH:/path/to/RFdiffusion2
   ```
   so that you do not need to set it every time.
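You can confirm from inside Python that the repository directory actually made it onto the import path. A small sketch; the repository path is whatever you exported above:

```python
import os
import sys


def on_import_path(repo_dir: str) -> bool:
    """Return True if repo_dir is visible on sys.path (e.g. via PYTHONPATH)."""
    target = os.path.abspath(repo_dir)
    # Directories listed in PYTHONPATH appear on sys.path at interpreter startup.
    return any(os.path.abspath(entry) == target for entry in sys.path if entry)
```

A `False` result usually means `PYTHONPATH` was set after the interpreter started, or in a different shell.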

.. _troubleshooting:

### Troubleshooting
<a id="install_troubleshooting"></a>
Ran into an installation issue not covered here? [Create a new issue!](https://github.com/RosettaCommons/RFdiffusion2/issues)

<details>
<summary>How to determine the highest available CUDA version on your system</summary>

The `nvidia-smi` command will print out information about the available GPUs you can access on your cluster.
The first line of the result will look something like:
```
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 535.230.02 Driver Version: 535.230.02 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
```
Here, this means that the system can only support up to CUDA 12.2. However, if you look at the possible [PyTorch versions](https://pytorch.org/get-started/previous-versions/) and [Deep Graph Library versions](https://www.dgl.ai/pages/start.html) on their installation pages, you'll notice that they don't have builds for 12.2, so in this situation you would need to change the installation instructions to work with CUDA 12.1.
</details>

<details>
<summary>Cannot find DGL C++ graphbolt library at...</summary>

Seeing this error likely means that the version of the Deep Graph Library (DGL) you have installed does not match the version of PyTorch your system is finding. Double-check that you installed the correct versions of these tools, and ensure that your system is not picking up a different version of PyTorch.

It can also be useful to `ls` in the given directory to see which versions of the DGL libraries you have installed. For example, if your error says it is looking for `graphbolt/libgraphbolt_pytorch_2.4.0.so`, your system is using PyTorch version 2.4.0. Meanwhile, if you `ls` in the directory, you might see that you only have `libgraphbolt_pytorch_2.1.2.so`, meaning that the version of DGL you downloaded was only meant to work with PyTorch versions up to 2.1.2.
</details>

<details>
<summary>No module named 'torchdata.datapipes'</summary>

Newer versions of TorchData no longer ship the DataPipes tools. You will need to downgrade your installed TorchData to version 0.9.0 or below. You can learn more about this change on [TorchData's PyPI page](https://pypi.org/project/torchdata/).
</details>
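If you want to detect this condition up front rather than hitting the ImportError mid-run, here is a standard-library-only sketch:

```python
import importlib.util


def module_available(dotted_name: str) -> bool:
    """Return True if `dotted_name` (e.g. "torchdata.datapipes") is importable."""
    try:
        return importlib.util.find_spec(dotted_name) is not None
    except ModuleNotFoundError:
        # The parent package (here, torchdata itself) is not installed at all.
        return False
```

`module_available("torchdata.datapipes")` returning `False` with TorchData installed means you have a version newer than 0.9.0 and need the downgrade above.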

_sources/license_link.rst.txt

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
LICENSE
#######
.. mdinclude:: ../../LICENSE.md

_sources/readme_link.rst.txt

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
README
######
.. mdinclude:: ../../README.md
_sources/usage/configuration_options.md.txt

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
## Configuration Options

_sources/usage/ori_tokens.md.txt

Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
## ORI Tokens

ORI Tokens

_sources/usage/usage.rst.txt

Lines changed: 30 additions & 0 deletions
@@ -0,0 +1,30 @@
Usage Examples
==============

The usage examples linked below demonstrate how to use the various provided scripts to run inference with RFdiffusion2.

Before using RFdiffusion2 for your own designs, make sure you are able to run the demo included in the `README <../readme_link.html#inference>`_.

How to run RFdiffusion2
-----------------------
For those of you who are familiar with running the `original RFdiffusion <https://github.com/RosettaCommons/RFdiffusion>`_, running RFdiffusion2 is very similar.
The main differences are:

* The inference script no longer chooses the best model weights for you; there is one recommended model weight file, located at ``RFdiffusion2/rf_diffusion/model_weights/RFD_173.pt``. This is the set of weights used in the demo in the README.

* RFdiffusion2 can now take atomic inputs, not just backbone-level information.

* An ORI token is expected in the input PDB, which specifies the center of mass for the design region. For more information, see the documentation on :doc:`ORI tokens <ori_tokens>`.


More information
^^^^^^^^^^^^^^^^
* :doc:`ORI Tokens <ori_tokens>` - Explanation of ORI tokens and how to use them in your designs.
* :doc:`Configuration Options <configuration_options>` - Explanation of the various configuration options available for RFdiffusion2.

Examples
^^^^^^^^

- :doc:`Using the pipelines.py script <other_pipeline_example>`
- :doc:`Using the run_inference.py script <run_inference_example>`
