Commit 5046a00

Merge branch 'main' into feature/icpflow
2 parents: 345db96 + 7fb4973

67 files changed: 3497 additions & 660 deletions


.gitignore

Lines changed: 4 additions & 1 deletion

````diff
@@ -25,4 +25,7 @@ cancel.sh
 # cuda build files
 *.egg-info
 build*
-dist*
+dist*
+
+data
+*__pycache__
````
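The new ignore patterns can be sanity-checked offline. A minimal sketch below approximates gitignore matching with `fnmatch` over path components (real gitignore semantics are richer — anchoring, negation, and `**` are not handled here):

```python
import fnmatch

# Patterns from the updated .gitignore (a bare name like `data` matches any path component).
PATTERNS = ["build*", "dist*", "data", "*.egg-info", "*__pycache__"]

def is_ignored(path: str) -> bool:
    """Rough approximation: a path is ignored if any component matches a pattern."""
    return any(
        fnmatch.fnmatch(part, pat)
        for part in path.split("/")
        for pat in PATTERNS
    )

print(is_ignored("dist/opensf-0.1.tar.gz"))       # True: matches dist*
print(is_ignored("data/av2/demo.h5"))             # True: matches data
print(is_ignored("src/utils/__pycache__/x.pyc"))  # True: matches *__pycache__
print(is_ignored("train.py"))                     # False
```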

CONTRIBUTING.md

Lines changed: 2 additions & 2 deletions

````diff
@@ -1,10 +1,10 @@
-# Contributing to OpenSceneFlow
+# Contributing to [OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow)
 
 We want to make contributing to this project as easy and transparent as possible. We welcome any contributions, from bug fixes to new features. If you're interested in adding your own scene flow method, this guide will walk you through the process.
 
 ## Adding a New Method
 
-Here is a quick guide to integrating a new method into the OpenSceneFlow codebase.
+Here is a quick guide to integrating a new method into the [OpenSceneFlow](https://github.com/KTH-RPL/OpenSceneFlow) codebase.
 
 ### 1. Data Preparation
 
````
Dockerfile

Lines changed: 23 additions & 11 deletions

````diff
@@ -1,8 +1,15 @@
-FROM pytorch/pytorch:2.1.0-cuda11.8-cudnn8-devel
+# check more: https://hub.docker.com/r/nvidia/cuda
+FROM nvidia/cuda:11.7.1-devel-ubuntu20.04
 ENV DEBIAN_FRONTEND noninteractive
 LABEL maintainer="Qingwen Zhang <https://kin-zhang.github.io/>"
 
-RUN apt update && apt install -y git tmux curl vim rsync libgl1 libglib2.0-0 ca-certificates
+RUN apt update && apt install -y git curl vim rsync htop
+
+RUN curl -o ~/miniforge3.sh -LO https://github.com/conda-forge/miniforge/releases/latest/download/miniforge3-Linux-x86_64.sh && \
+    chmod +x ~/miniforge3.sh && \
+    ~/miniforge3.sh -b -p /opt/conda && \
+    rm ~/miniforge3.sh && \
+    /opt/conda/bin/conda clean -ya && /opt/conda/bin/conda init bash && /opt/conda/bin/conda init zsh
 
 # install zsh and oh-my-zsh
 RUN apt update && apt install -y wget git zsh tmux vim g++
@@ -14,23 +21,28 @@ RUN sh -c "$(wget -O- https://github.com/deluan/zsh-in-docker/releases/download/
     -p https://github.com/zsh-users/zsh-syntax-highlighting
 
 RUN printf "y\ny\ny\n\n" | bash -c "$(curl -fsSL https://raw.githubusercontent.com/Kin-Zhang/Kin-Zhang/main/scripts/setup_ohmyzsh.sh)"
-RUN /opt/conda/bin/conda init zsh
 
 # change to conda env
 ENV PATH /opt/conda/bin:$PATH
-RUN /opt/conda/bin/conda config --set solver libmamba
 
-RUN mkdir -p /home/kin/workspace && cd /home/kin/workspace && git clone https://github.com/Kin-Zhang/OpenSceneFlow
+RUN mkdir -p /home/kin/workspace && cd /home/kin/workspace && git clone https://github.com/KTH-RPL/OpenSceneFlow.git
 WORKDIR /home/kin/workspace/OpenSceneFlow
+RUN apt-get update && apt-get install libgl1 -y
 
 # need read the gpu device info to compile the cuda extension
-RUN /opt/conda/bin/pip install -r /home/kin/workspace/OpenSceneFlow/requirements.txt
-RUN /opt/conda/bin/pip install FastGeodis --no-build-isolation
-RUN /opt/conda/bin/pip install --no-cache-dir -e ./assets/cuda/chamfer3D && /opt/conda/bin/pip install --no-cache-dir -e ./assets/cuda/mmcv
+RUN cd /home/kin/workspace/OpenSceneFlow && /opt/conda/bin/conda env create -f environment.yaml
+# To make images can run all methods in the codebase
+RUN /opt/conda/envs/opensf/bin/pip install torch-scatter -f https://data.pyg.org/whl/torch-2.0.0+cu117.html
+RUN /opt/conda/envs/opensf/bin/pip install FastGeodis --no-build-isolation --no-cache-dir
+RUN /opt/conda/envs/opensf/bin/pip install mmengine-lite && \
+    /opt/conda/bin/conda install -n opensf -y pytorch3d -c pytorch3d
+
+# custom cuda library
+RUN cd /home/kin/workspace/OpenSceneFlow/assets/cuda/mmcv && /opt/conda/envs/opensf/bin/python ./setup.py install
+RUN cd /home/kin/workspace/OpenSceneFlow/assets/cuda/chamfer3D && /opt/conda/envs/opensf/bin/python ./setup.py install
 
-# environment for dataprocessing includes data-api
-RUN /opt/conda/bin/conda env create -f envsftool.yaml
+RUN cd /home/kin/workspace/OpenSceneFlow && /opt/conda/bin/conda env create -f envsftool.yaml
 RUN /opt/conda/envs/sftool/bin/pip install numpy==1.22
 
 # clean up apt cache
-RUN rm -rf /var/lib/apt/lists/* && rm -rf /root/.cache/pip
+RUN rm -rf /var/lib/apt/lists/* && rm -rf /root/.cache/pip && /opt/conda/bin/conda clean -ya
````
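The new Dockerfile pins `torch-scatter` to a wheel index that matches the image's Torch/CUDA pairing. A small sketch of how that find-links URL is composed (the helper name is ours; the version strings are the ones used in the Dockerfile above):

```python
def pyg_find_links(torch_version: str, cuda_tag: str) -> str:
    """Find-links index for prebuilt PyG wheels (torch-scatter etc.)."""
    return f"https://data.pyg.org/whl/torch-{torch_version}+{cuda_tag}.html"

# The Dockerfile uses torch 2.0.0 wheels built against CUDA 11.7:
url = pyg_find_links("2.0.0", "cu117")
print(url)  # https://data.pyg.org/whl/torch-2.0.0+cu117.html
```

Keeping the wheel index in lockstep with the base image's CUDA version (`nvidia/cuda:11.7.1`) avoids ABI mismatches between `torch-scatter` and the installed Torch build.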

README.md

Lines changed: 93 additions & 51 deletions
````diff
@@ -11,10 +11,20 @@
 OpenSceneFlow is a codebase for point cloud scene flow estimation.
 It is also an official implementation of the following papers (sorted by the time of publication):
 
+- **DeltaFlow: An Efficient Multi-frame Scene Flow Estimation Method**
+  *Qingwen Zhang, Xiaomeng Zhu, Yushan Zhang, Yixi Cai, Olov Andersson, Patric Jensfelt*
+  Preprint; Under review; 2025
+  [ Backbone ] [ Supervised ] - [ [arXiv](https://arxiv.org/abs/2508.17054) ] [ [Project](https://github.com/Kin-Zhang/DeltaFlow) ]
+
 - **HiMo: High-Speed Objects Motion Compensation in Point Clouds** (SeFlow++)
   *Qingwen Zhang, Ajinkya Khoche, Yi Yang, Li Ling, Sina Sharif Mansouri, Olov Andersson, Patric Jensfelt*
-  Preprint; Under review; 2025
-  [ Strategy ] [ Self-Supervised ] - [ [arXiv](https://arxiv.org/abs/2503.00803) ] [ [Project](https://kin-zhang.github.io/HiMo/) ]
+  IEEE Transactions on Robotics (**T-RO**) 2025
+  [ Strategy ] [ Self-Supervised ] - [ [arXiv](https://arxiv.org/abs/2503.00803) ] [ [Project](https://kin-zhang.github.io/HiMo/) ] &rarr; [here](#seflow-1)
+
+- **VoteFlow: Enforcing Local Rigidity in Self-Supervised Scene Flow**
+  *Yancong Lin\*, Shiming Wang\*, Liangliang Nan, Julian Kooij, Holger Caesar*
+  Conference on Computer Vision and Pattern Recognition (**CVPR**) 2025
+  [ Backbone ] [ Self-Supervised ] - [ [arXiv](https://arxiv.org/abs/2503.22328) ] [ [Project](https://github.com/tudelft-iv/VoteFlow/) ] &rarr; [here](#VoteFLow)
 
 - **Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation**
   *Jaeyeul Kim, Jungwan Woo, Ukcheol Shin, Jean Oh, Sunghoon Im*
````
````diff
@@ -50,21 +60,12 @@ Additionally, *OpenSceneFlow* integrates following excellent works: [ICLR'24 Zer
 - [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024, their pre-trained weight can covert into our format easily through [the script](tools/zerof2ours.py).
 - [x] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, faster 3x than original version because of [our CUDA speed up](assets/cuda/README.md), same (slightly better) performance.
 - [x] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023. SSL optimization-based.
+- [ ] [EulerFlow](https://arxiv.org/abs/2410.02031): ICLR 2025. SSL optimization-based. In my plan, haven't coding yet.
 
 </details>
 
 💡: Want to learn how to add your own network in this structure? Check [Contribute section](CONTRIBUTING.md#adding-a-new-method) and know more about the code. Fee free to pull request and your bibtex [here](#cite-us).
 
----
-
-<!-- 📜 Changelog:
-
-- 🎁 2025/1/28 14:58: Update the codebase to collect all methods in one repository reference [Pointcept](https://github.com/Pointcept/Pointcept) repo.
-- 🤗 2024/11/18 16:17: Update model and demo data download link through HuggingFace, Personally I found `wget` from HuggingFace link is much faster than Zenodo.
-- 2024/09/26 16:24: All codes already uploaded and tested. You can to try training directly by downloading (through [HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow)/[Zenodo](https://zenodo.org/records/13744999)) demo data or pretrained weight for evaluation.
-- 2024/07/24: Merging SeFlow & DeFlow code together, lighter setup and easier running.
-- 🔥 2024/07/02: Check the self-supervised version in our new ECCV'24 [SeFlow](https://github.com/KTH-RPL/SeFlow). The 1st ranking in new leaderboard among self-supervise methods. -->
-
 ## 0. Installation
 
 There are two ways to install the codebase: directly on your [local machine](#environment-setup) or in a [Docker container](#docker-recommended-for-isolation).
````
````diff
@@ -75,7 +76,7 @@ We use conda to manage the environment, you can install it follow [here](assets/
 
 ```bash
 git clone --recursive https://github.com/KTH-RPL/OpenSceneFlow.git
-cd OpenSceneFlow && mamba env create -f environment.yaml
+cd OpenSceneFlow && conda env create -f environment.yaml
 
 # You may need export your LD_LIBRARY_PATH with env lib
 # export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/kin/mambaforge/lib
````
````diff
@@ -98,23 +99,24 @@ cd /home/kin/workspace/OpenSceneFlow && git pull
 cd /home/kin/workspace/OpenSceneFlow/assets/cuda/mmcv && /opt/conda/envs/opensf/bin/python ./setup.py install
 cd /home/kin/workspace/OpenSceneFlow/assets/cuda/chamfer3D && /opt/conda/envs/opensf/bin/python ./setup.py install
 cd /home/kin/workspace/OpenSceneFlow
-mamba activate opensf
+conda activate opensf
 ```
 
 If you prefer to build the Docker image by yourself, Check [build-docker-image](assets/README.md#build-docker-image) section for more details.
 
 ## 1. Data Preparation
 
-Refer to [dataprocess/README.md](dataprocess/README.md) for dataset download instructions. Currently, we support **Argoverse 2**, **Waymo**, and **custom datasets** (more datasets will be added in the future).
+Refer to [dataprocess/README.md](dataprocess/README.md) for dataset download instructions. Currently, we support **Argoverse 2**, **Waymo**, **nuScenes**, **ZOD** and **custom datasets** (more datasets will be added in the future).
 
 After downloading, convert the raw data to `.h5` format for easy training, evaluation, and visualization. Follow the steps in [dataprocess/README.md#process](dataprocess/README.md#process).
 
 For a quick start, use our **mini processed dataset**, which includes one scene in `train` and `val`. It is pre-converted to `.h5` format with label data ([HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip)/[Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)).
 
 
 ```bash
-wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/demo_data.zip
-unzip demo_data.zip -d /home/kin/data/av2/h5py
+# around 1.3G
+wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/demo-data-v2.zip
+unzip demo-data-v2.zip -d /home/kin/data/av2/h5py
 ```
 
 Once extracted, you can directly use this dataset to run the [training script](#2-quick-start) without further processing.
````
````diff
@@ -129,35 +131,34 @@ Some tips before running the code:
 And free yourself from trainning, you can download the pretrained weight from [HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow) and we provided the detail `wget` command in each model section. For optimization-based method, it's train-free so you can directly run with [3. Evaluation](#3-evaluation) (check more in the evaluation section).
 
 ```bash
-mamba activate opensf
+conda activate opensf
 ```
 
-### Flow4D
+### Supervised Training
+
+#### Flow4D
 
 Train Flow4D with the leaderboard submit config. [Runtime: Around 18 hours in 4x RTX 3090 GPUs.]
 
 ```bash
-python train.py model=flow4d lr=1e-3 epochs=15 batch_size=8 num_frames=5 loss_fn=deflowLoss "voxel_size=[0.2, 0.2, 0.2]" "point_cloud_range=[-51.2, -51.2, -3.2, 51.2, 51.2, 3.2]"
-```
+python train.py model=flow4d optimizer.lr=1e-3 epochs=15 batch_size=8 num_frames=5 loss_fn=deflowLoss "voxel_size=[0.2, 0.2, 0.2]" "point_cloud_range=[-51.2, -51.2, -3.2, 51.2, 51.2, 3.2]"
 
-Pretrained weight can be downloaded through:
-```bash
+# Pretrained weight can be downloaded through:
 wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/flow4d_best.ckpt
 ```
 
-### SSF
+#### SSF
 
 Extra packages needed for SSF model:
 ```bash
-pip install mmengine-lite torch-scatter
-# torch-scatter might not working, then reinstall by:
-pip install https://data.pyg.org/whl/torch-2.0.0%2Bcu118/torch_scatter-2.1.2%2Bpt20cu118-cp310-cp310-linux_x86_64.whl
+pip install mmengine-lite
+pip install torch-scatter -f https://data.pyg.org/whl/torch-2.0.0+cu117.html
 ```
 
 Train SSF with the leaderboard submit config. [Runtime: Around 6 hours in 8x A100 GPUs.]
 
 ```bash
-python train.py model=ssf lr=8e-3 epochs=25 batch_size=64 loss_fn=deflowLoss "voxel_size=[0.2, 0.2, 6]" "point_cloud_range=[-51.2, -51.2, -3, 51.2, 51.2, 3]"
+python train.py model=ssf optimizer.lr=8e-3 epochs=25 batch_size=64 loss_fn=deflowLoss "voxel_size=[0.2, 0.2, 6]" "point_cloud_range=[-51.2, -51.2, -3, 51.2, 51.2, 3]"
 ```
 
 Pretrained weight can be downloaded through:
````
````diff
@@ -169,48 +170,82 @@ wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/ssf_best.ckpt
 wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/ssf_long.ckpt
 ```
 
+#### DeFlow
 
-### SeFlow
-
-Train SeFlow needed to specify the loss function, we set the config of our best model in the leaderboard. [Runtime: Around 11 hours in 4x A100 GPUs.]
+Train DeFlow with the leaderboard submit config. [Runtime: Around 6-8 hours in 4x A100 GPUs.] Please change `batch_size&lr` accoordingly if you don't have enough GPU memory. (e.g. `batch_size=6` for 24GB GPU)
 
 ```bash
-python train.py model=deflow lr=2e-4 epochs=9 batch_size=16 loss_fn=seflowLoss "add_seloss={chamfer_dis: 1.0, static_flow_loss: 1.0, dynamic_chamfer_dis: 1.0, cluster_based_pc0pc1: 1.0}" "model.target.num_iters=2"
+python train.py model=deflow optimizer.lr=2e-4 epochs=15 batch_size=16 loss_fn=deflowLoss
+
+# Pretrained weight can be downloaded through:
+wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/deflow_best.ckpt
 ```
 
-Pretrained weight can be downloaded through:
+### Feed-Forward Self-Supervised Model Training
+
+Train Feed-forward SSL methods (e.g. SeFlow/SeFlow++/VoteFlow etc), we needed to:
+1) process auto-label process.
+2) specify the loss function, we set the config here for our best model in the leaderboard.
+
+#### SeFlow
+
 ```bash
+# [Runtime: Around 11 hours in 4x A100 GPUs.]
+python train.py model=deflow optimizer.lr=2e-4 epochs=9 batch_size=16 loss_fn=seflowLoss +ssl_label=seflow_auto "+add_seloss={chamfer_dis: 1.0, static_flow_loss: 1.0, dynamic_chamfer_dis: 1.0, cluster_based_pc0pc1: 1.0}" "model.target.num_iters=2"
+
+# Pretrained weight can be downloaded through:
 wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/seflow_best.ckpt
 ```
 
-### ICP-Flow
+#### VoteFLow
+Extra pakcges needed for VoteFlow, [pytorch3d](https://pytorch3d.org/) (prefer 0.7.7) and [torch-scatter](https://github.com/rusty1s/pytorch_scatter?tab=readme-ov-file) (prefer 2.1.2):
 
-ICP-Flow is a optimization-based method, you can directly run `eval.py`/`save.py` to get the result.
+```bash
+# Install Pytorch3d
+conda install pytorch3d -c pytorch3d
 
-Extra packages needed for ICP-Flow model:
+# Install torch-scatter
+pip install torch-scatter -f https://data.pyg.org/whl/torch-2.0.0+cu117.html
+```
+
+Train VoteFlow with the leaderboard submit config. [Runtime: Around 32 hours in 4 x V100 GPUs.]
 ```bash
-pip install pytorch3d assets/cuda/histlib
+python train.py model=voteflow optimizer.lr=2e-4 +optimizer.scheduler.name=StepLR +optimizer.scheduler.step_size=6 epochs=12 batch_size=4 model.target.m=8 model.target.n=128 loss_fn=seflowLoss "+add_seloss={chamfer_dis: 1.0, static_flow_loss: 1.0, dynamic_chamfer_dis: 1.0, cluster_based_pc0pc1: 1.0}" +ssl_label=seflow_auto
+
+# Pretrained weight can be downloaded through:
+wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/voteflow_best.ckpt
 ```
 
-Then run as:
+
+#### SeFlow++
+
 ```bash
-python eval.py model=icpflow
-python save.py model=icpflow
+# [Runtime: Around 10 hours in 4x A100 GPUs.] for Argoverse 2
+python train.py model=deflowpp save_top_model=3 val_every=3 voxel_size="[0.2, 0.2, 6]" point_cloud_range="[-51.2, -51.2, -3, 51.2, 51.2, 3]" num_workers=16 epochs=9 optimizer.lr=2e-4 +optimizer.scheduler.name=StepLR "+add_seloss={chamfer_dis: 1.0, static_flow_loss: 1.0, dynamic_chamfer_dis: 1.0, cluster_based_pc0pc1: 1.0}" +ssl_label=seflowpp_auto loss_fn=seflowppLoss num_frames=3 batch_size=4
+
+# Pretrained weight can be downloaded through:
+wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/seflowpp_best.ckpt
 ```
 
-### DeFlow
 
-Train DeFlow with the leaderboard submit config. [Runtime: Around 6-8 hours in 4x A100 GPUs.] Please change `batch_size&lr` accoordingly if you don't have enough GPU memory. (e.g. `batch_size=6` for 24GB GPU)
+### Optimization-based Unsupervised Methods
+
+For all optimization-based methods, you can directly run `eval.py`/`save.py` to get the result without training, while the running might take really long time, maybe tmux for run it.
+
+#### ICP-Flow
 
+Extra packages needed for ICP-Flow model:
 ```bash
-python train.py model=deflow lr=2e-4 epochs=15 batch_size=16 loss_fn=deflowLoss
+pip install pytorch3d assets/cuda/histlib
 ```
 
-Pretrained weight can be downloaded through:
+Then run as:
 ```bash
-wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/deflow_best.ckpt
+python eval.py model=icpflow
+python save.py model=icpflow
 ```
 
+
 ## 3. Evaluation
 
 You can view Wandb dashboard for the training and evaluation results or upload result to online leaderboard.
````
````diff
@@ -219,14 +254,14 @@ Since in training, we save all hyper-parameters and model checkpoints, the only
 
 ```bash
 # (feed-forward): load ckpt and run it, it will directly prints all metric
-python eval.py checkpoint=/home/kin/seflow_best.ckpt av2_mode=val
+python eval.py checkpoint=/home/kin/seflow_best.ckpt data_mode=val
 
 # (optimization-based): it might need take really long time, maybe tmux for run it.
 python eval.py model=nsfp
 
 # it will output the av2_submit.zip or av2_submit_v2.zip for you to submit to leaderboard
-python eval.py checkpoint=/home/kin/seflow_best.ckpt av2_mode=test leaderboard_version=1
-python eval.py checkpoint=/home/kin/seflow_best.ckpt av2_mode=test leaderboard_version=2
+python eval.py checkpoint=/home/kin/seflow_best.ckpt data_mode=test leaderboard_version=1
+python eval.py checkpoint=/home/kin/seflow_best.ckpt data_mode=test leaderboard_version=2
 ```
 
 ### **📊 Range-Wise Metric (New!)**
````
````diff
@@ -243,13 +278,13 @@ In [SSF paper](https://arxiv.org/abs/2501.17821), we introduce a new distance-ba
 
 
 ### Submit result to public leaderboard
-To submit your result to the public Leaderboard, if you select `av2_mode=test`, it should be a zip file for you to submit to the leaderboard.
+To submit your result to the public Leaderboard, if you select `data_mode=test`, it should be a zip file for you to submit to the leaderboard.
 Note: The leaderboard result in DeFlow&SeFlow main paper is [version 1](https://eval.ai/web/challenges/challenge-page/2010/evaluation), as [version 2](https://eval.ai/web/challenges/challenge-page/2210/overview) is updated after DeFlow&SeFlow.
 
 ```bash
 # since the env may conflict we set new on deflow, we directly create new one:
-mamba create -n py37 python=3.7
-mamba activate py37
+conda create -n py37 python=3.7
+conda activate py37
 pip install "evalai"
 
 # Step 2: login in eval and register your team
````
````diff
@@ -346,6 +381,13 @@ And our excellent collaborators works contributed to this codebase also:
   journal={arXiv preprint arXiv:2501.17821},
   year={2025}
 }
+
+@inproceedings{lin2025voteflow,
+  title={VoteFlow: Enforcing Local Rigidity in Self-Supervised Scene Flow},
+  author={Lin, Yancong and Wang, Shiming and Nan, Liangliang and Kooij, Julian and Caesar, Holger},
+  booktitle={CVPR},
+  year={2025},
+}
 @article{lin2024icp,
   title={ICP-Flow: LiDAR Scene Flow Estimation with ICP},
   author={Lin, Yancong and Caesar, Holger},
````
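A recurring change in the README diffs is renaming flat `lr=...` overrides to `optimizer.lr=...`. A plain-Python stand-in for how Hydra-style dotted overrides nest into the config tree (illustrative only; Hydra/OmegaConf handle this internally):

```python
def apply_override(cfg: dict, override: str) -> dict:
    """Apply one 'a.b.c=value' style override to a nested dict config."""
    key, _, value = override.partition("=")
    node = cfg
    parts = key.split(".")
    for part in parts[:-1]:
        node = node.setdefault(part, {})  # walk/create intermediate nodes
    node[parts[-1]] = value
    return cfg

cfg = {"model": "flow4d", "optimizer": {"lr": "1e-4"}}
apply_override(cfg, "optimizer.lr=1e-3")  # new style: nests under optimizer
apply_override(cfg, "epochs=15")          # top-level keys stay flat
print(cfg)  # {'model': 'flow4d', 'optimizer': {'lr': '1e-3'}, 'epochs': '15'}
```

With the old flat `lr=1e-3` form, the value would land at the top level of the config instead of inside the `optimizer` node, which is why the commands were updated.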

assets/README.md

Lines changed: 4 additions & 0 deletions

````diff
@@ -80,6 +80,7 @@ python -c "import torch; print(torch.__version__); print(torch.cuda.is_available
 python -c "import lightning.pytorch as pl; print(pl.__version__)"
 python -c "from assets.cuda.mmcv import Voxelization, DynamicScatter;print('successfully import on our lite mmcv package')"
 python -c "from assets.cuda.chamfer3D import nnChamferDis;print('successfully import on our chamfer3D package')"
+python -c "from av2.utils.io import read_feather; print('av2 package ok')"
 ```
 
 
@@ -100,3 +101,6 @@ python -c "from assets.cuda.chamfer3D import nnChamferDis;print('successfully im
 
 3. torch_scatter problem: `OSError: /home/kin/mambaforge/envs/opensf-v2/lib/python3.10/site-packages/torch_scatter/_version_cpu.so: undefined symbol: _ZN5torch3jit17parseSchemaOrNameERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE`
 Solved by install the torch-cuda version: `pip install https://data.pyg.org/whl/torch-2.0.0%2Bcu118/torch_scatter-2.1.2%2Bpt20cu118-cp310-cp310-linux_x86_64.whl`
+
+4. cuda package problem: `ValueError(f"Unknown CUDA arch ({arch}) or GPU not supported")`
+Solved by [checking GPU compute](https://developer.nvidia.cn/cuda-gpus#compute) then manually assign: `export TORCH_CUDA_ARCH_LIST=8.6`
````
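The `TORCH_CUDA_ARCH_LIST` workaround added in item 4 can be scripted. A hypothetical helper (the helper itself is ours, not part of the repo; the GPU-to-compute-capability values are NVIDIA's published figures for the GPUs mentioned in this README):

```python
import os

# Compute capability per GPU model (see https://developer.nvidia.com/cuda-gpus).
COMPUTE_CAPABILITY = {
    "V100": "7.0",
    "A100": "8.0",
    "RTX 3090": "8.6",
}

def set_arch_list(gpu_name: str) -> str:
    """Pin TORCH_CUDA_ARCH_LIST so CUDA extensions build for one known arch."""
    arch = COMPUTE_CAPABILITY[gpu_name]
    os.environ["TORCH_CUDA_ARCH_LIST"] = arch
    return arch

print(set_arch_list("RTX 3090"))  # 8.6, mirroring `export TORCH_CUDA_ARCH_LIST=8.6`
```

Setting the variable before running `setup.py install` for the `mmcv`/`chamfer3D` extensions sidesteps the failed auto-detection that raises the `Unknown CUDA arch` error.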
