# Contributing to [OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow)
We want to make contributing to this project as easy and transparent as possible. We welcome any contributions, from bug fixes to new features. If you're interested in adding your own scene flow method, this guide will walk you through the process.
## Adding a New Method
Here is a quick guide to integrating a new method into the [OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow) codebase.
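As a rough sketch of the contract a new method implements (all names here are hypothetical; the actual codebase is PyTorch-based, so check the existing model files for the real interface), a scene flow model consumes two consecutive point clouds and returns one flow vector per point of the first cloud:

```python
import numpy as np

class MyFlowModel:
    """Hypothetical skeleton for a new scene flow method.

    A real OpenSceneFlow model is a PyTorch module; this numpy sketch
    only shows the expected input/output contract: two point clouds in,
    one (N, 3) flow vector per point of the first cloud out.
    """

    def __init__(self, voxel_size: float = 0.2):
        self.voxel_size = voxel_size  # hypothetical hyperparameter

    def forward(self, pc0: np.ndarray, pc1: np.ndarray) -> np.ndarray:
        # Placeholder prediction: zero flow for every point in pc0.
        # A real method would regress flow from learned features.
        assert pc0.ndim == 2 and pc0.shape[1] == 3
        return np.zeros_like(pc0)

model = MyFlowModel()
pc0 = np.random.rand(100, 3)  # frame t
pc1 = np.random.rand(120, 3)  # frame t+1
flow = model.forward(pc0, pc1)
print(flow.shape)  # (100, 3)
```

Once your model follows this contract, the training and evaluation scripts can drive it like any other method in the repository.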
**Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation**
*Jaeyeul Kim, Jungwan Woo, Ukcheol Shin, Jean Oh, Sunghoon Im*
Additionally, *OpenSceneFlow* integrates the following excellent works:
- [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024. Their pre-trained weights can easily be converted into our format through [the script](tools/zerof2ours.py).
- [x] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021. 3x faster than the original version thanks to [our CUDA speed-up](assets/cuda/README.md), with the same (slightly better) performance.
- [ ] [EulerFlow](https://arxiv.org/abs/2410.02031): ICLR 2025. SSL optimization-based. Planned, but not yet implemented.
</details>
💡: Want to learn how to add your own network in this structure? Check the [Contribute section](CONTRIBUTING.md#adding-a-new-method) to learn more about the code. Feel free to open a pull request and add your bibtex [here](#cite-us).
---
<!-- 📜 Changelog:

- 🎁 2025/1/28 14:58: Update the codebase to collect all methods in one repository, referencing the [Pointcept](https://github.com/Pointcept/Pointcept) repo.
- 🤗 2024/11/18 16:17: Update model and demo data download links through HuggingFace. Personally, I found `wget` from the HuggingFace link much faster than Zenodo.
- 2024/09/26 16:24: All code has been uploaded and tested. You can try training directly by downloading the demo data or pretrained weights for evaluation (through [HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow)/[Zenodo](https://zenodo.org/records/13744999)).
- 🔥 2024/07/02: Check the self-supervised version in our new ECCV'24 [SeFlow](https://github.com/KTH-RPL/SeFlow). Ranked 1st among self-supervised methods in the new leaderboard. -->
## 0. Installation
There are two ways to install the codebase: directly on your [local machine](#environment-setup) or in a [Docker container](#docker-recommended-for-isolation).
We use conda to manage the environment; you can install it by following the guide in `assets/`.

```bash
cd /home/kin/workspace/OpenSceneFlow && git pull
cd /home/kin/workspace/OpenSceneFlow/assets/cuda/mmcv && /opt/conda/envs/opensf/bin/python ./setup.py install
cd /home/kin/workspace/OpenSceneFlow/assets/cuda/chamfer3D && /opt/conda/envs/opensf/bin/python ./setup.py install
cd /home/kin/workspace/OpenSceneFlow
conda activate opensf
```
If you prefer to build the Docker image yourself, check the [build-docker-image](assets/README.md#build-docker-image) section for more details.
## 1. Data Preparation
Refer to [dataprocess/README.md](dataprocess/README.md) for dataset download instructions. Currently, we support **Argoverse 2**, **Waymo**, **nuScenes**, **ZOD**, and **custom datasets** (more datasets will be added in the future).
After downloading, convert the raw data to `.h5` format for easy training, evaluation, and visualization. Follow the steps in [dataprocess/README.md#process](dataprocess/README.md#process).
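As an illustration of the target format (the group and dataset names below are hypothetical; the converter scripts referenced in [dataprocess/README.md](dataprocess/README.md) define the actual schema), an `.h5` scene file stores per-frame point arrays roughly like this:

```python
import os
import tempfile

import h5py
import numpy as np

# Hypothetical layout: one HDF5 group per frame, holding the LiDAR points
# and per-point labels. The real schema is defined by dataprocess/.
points = np.random.rand(1000, 3).astype(np.float32)
labels = np.zeros(1000, dtype=np.uint8)

path = os.path.join(tempfile.mkdtemp(), "scene_demo.h5")
with h5py.File(path, "w") as f:
    frame = f.create_group("000000")  # frame index/timestamp
    frame.create_dataset("lidar", data=points, compression="gzip")
    frame.create_dataset("label", data=labels, compression="gzip")

with h5py.File(path, "r") as f:
    shape = f["000000"]["lidar"].shape
print(shape)  # (1000, 3)
```

Grouping frames inside one file like this is what makes training, evaluation, and visualization able to random-access any frame without touching the raw dataset.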
For a quick start, use our **mini processed dataset**, which includes one scene in `train` and `val`. It is pre-converted to `.h5` format with label data ([HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip)/[Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)).
Once extracted, you can directly use this dataset to run the [training script](#2-quick-start) without further processing.
Some tips before running the code:
To free yourself from training, you can download pretrained weights from [HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow); we provide the detailed `wget` command in each model section. Optimization-based methods are training-free, so you can run them directly with [3. Evaluation](#3-evaluation) (check the evaluation section for more).
```bash
conda activate opensf
```
### Supervised Training

#### Flow4D
Train Flow4D with the leaderboard submit config. [Runtime: around 18 hours on 4x RTX 3090 GPUs.]

Training SeFlow requires specifying the loss function; we provide the config of our best model on the leaderboard. [Runtime: around 11 hours on 4x A100 GPUs.]
Train DeFlow with the leaderboard submit config. [Runtime: around 6-8 hours on 4x A100 GPUs.] Please change `batch_size` and `lr` accordingly if you don't have enough GPU memory (e.g., `batch_size=6` for a 24GB GPU).

Extra packages are needed for VoteFlow: [pytorch3d](https://pytorch3d.org/) (prefer 0.7.7) and [torch-scatter](https://github.com/rusty1s/pytorch_scatter?tab=readme-ov-file) (prefer 2.1.2):
ICP-Flow is an optimization-based method; you can directly run `eval.py`/`save.py` to get the result.
### Optimization-based Unsupervised Methods
For all optimization-based methods, you can directly run `eval.py`/`save.py` to get results without training. Note that the run might take a really long time, so consider launching it inside tmux.
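To see why these runs are slow, note that each scene is fitted from scratch at test time. The toy numpy sketch below is not the repo's implementation (NSFP optimizes a small neural field, ICP-Flow runs per-cluster ICP); it optimizes a single global translation by repeated nearest-neighbor matching, which shows the match-then-update loop these methods iterate many times per frame:

```python
import numpy as np

rng = np.random.default_rng(0)
pc0 = rng.random((200, 3))                 # frame t
true_shift = np.array([0.3, -0.1, 0.05])
pc1 = pc0 + true_shift                     # frame t+1: rigidly shifted copy

# Toy "optimization-based scene flow": gradient descent on one global
# translation t, minimizing mean squared nearest-neighbor distance.
t = np.zeros(3)
lr = 0.5
for _ in range(100):
    warped = pc0 + t
    # Brute-force nearest neighbor in pc1 for each warped point.
    d = warped[:, None, :] - pc1[None, :, :]   # (200, 200, 3)
    nn = np.argmin((d ** 2).sum(-1), axis=1)
    residual = warped - pc1[nn]                # (200, 3)
    t -= lr * 2 * residual.mean(axis=0)        # gradient step on the MSE

print(np.round(t, 3))  # converges toward true_shift
```

Real methods repeat a loop like this per scene, with a neural field or per-cluster rigid fits instead of a single translation, which is why tmux (or any detachable session) is handy for long runs.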
In the [SSF paper](https://arxiv.org/abs/2501.17821), we introduce a new distance-based evaluation.
### Submit results to the public leaderboard
To submit your result to the public leaderboard, select `data_mode=test`; the output will be a zip file for you to submit to the leaderboard.
Note: the leaderboard results in the DeFlow & SeFlow papers are from [version 1](https://eval.ai/web/challenges/challenge-page/2010/evaluation), as [version 2](https://eval.ai/web/challenges/challenge-page/2210/overview) was updated after DeFlow & SeFlow.
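Before uploading, it can help to sanity-check what went into the archive. A generic Python check (the archive and file names here are placeholders, not what `save.py` actually emits):

```python
import os
import tempfile
import zipfile

# Build a stand-in archive the way a save run would bundle per-log results
# ("submission.zip" and "log_id.feather" are placeholder names).
tmp = tempfile.mkdtemp()
result = os.path.join(tmp, "log_id.feather")
with open(result, "wb") as f:
    f.write(b"dummy bytes")

archive = os.path.join(tmp, "submission.zip")
with zipfile.ZipFile(archive, "w") as zf:
    zf.write(result, arcname="log_id.feather")

# List the contents before submitting to the leaderboard.
with zipfile.ZipFile(archive) as zf:
    names = zf.namelist()
print(names)  # ['log_id.feather']
```

If the listing is missing files or contains extra directory prefixes the leaderboard does not expect, fix the archive before burning a submission attempt.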
```bash
# Step 1: since the env may conflict with the one we set for deflow, we directly create a new one:
conda create -n py37 python=3.7
conda activate py37
pip install "evalai"

# Step 2: log in to evalai and register your team
```
Our excellent collaborators' works also contributed to this codebase:
  journal={arXiv preprint arXiv:2501.17821},
  year={2025}
}

@inproceedings{lin2025voteflow,
  title={VoteFlow: Enforcing Local Rigidity in Self-Supervised Scene Flow},
  author={Lin, Yancong and Wang, Shiming and Nan, Liangliang and Kooij, Julian and Caesar, Holger},
  booktitle={CVPR},
  year={2025},
}

@article{lin2024icp,
  title={ICP-Flow: LiDAR Scene Flow Estimation with ICP},
Solved by installing the torch-CUDA-matched version: `pip install https://data.pyg.org/whl/torch-2.0.0%2Bcu118/torch_scatter-2.1.2%2Bpt20cu118-cp310-cp310-linux_x86_64.whl`
4. CUDA package problem: `ValueError(f"Unknown CUDA arch ({arch}) or GPU not supported")`
Solved by [checking your GPU's compute capability](https://developer.nvidia.cn/cuda-gpus#compute) and then manually assigning it, e.g.: `export TORCH_CUDA_ARCH_LIST=8.6`