<p align="center">
  <a href="https://github.com/KTH-RPL/OpenSceneFlow">
    <picture>
      <img alt="opensceneflow" src="assets/docs/logo.png" width="600">
    </picture><br>
  </a>
</p>

💞 If you find [*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) useful to your research, please cite [**our works** 📖](#cite-us) and [give a star 🌟](https://github.com/KTH-RPL/OpenSceneFlow) as encouragement. (੭ˊ꒳ˋ)੭✧

OpenSceneFlow is a codebase for point cloud scene flow estimation.
It is also the official implementation of the following papers (sorted by publication date):

- **HiMo: High-Speed Objects Motion Compensation in Point Clouds** (SeFlow++)
*Qingwen Zhang, Ajinkya Khoche, Yi Yang, Li Ling, Sina Sharif Mansouri, Olov Andersson, Patric Jensfelt*
IEEE Transactions on Robotics (**T-RO**) 2025
[ Strategy ] [ Self-Supervised ] - [ [arXiv](https://arxiv.org/abs/2503.00803) ] [ [Project](https://kin-zhang.github.io/HiMo/) ] → [here](#seflow-1)

- **Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation**
*Jaeyeul Kim, Jungwan Woo, Ukcheol Shin, Jean Oh, Sunghoon Im*
IEEE Robotics and Automation Letters (**RA-L**) 2025
[ Backbone ] [ Supervised ] - [ [arXiv](https://arxiv.org/abs/2407.07995) ] [ [Project](https://github.com/dgist-cvlab/Flow4D) ] → [here](#flow4d)

- **SSF: Sparse Long-Range Scene Flow for Autonomous Driving**
*Ajinkya Khoche, Qingwen Zhang, Laura Pereira Sánchez, Aron Asefaw, Sina Sharif Mansouri and Patric Jensfelt*
International Conference on Robotics and Automation (**ICRA**) 2025
[ Backbone ] [ Supervised ] - [ [arXiv](https://arxiv.org/abs/2501.17821) ] [ [Project](https://github.com/KTH-RPL/SSF) ] → [here](#ssf)

- **SeFlow: A Self-Supervised Scene Flow Method in Autonomous Driving**
*Qingwen Zhang, Yi Yang, Peizheng Li, Olov Andersson, Patric Jensfelt*
European Conference on Computer Vision (**ECCV**) 2024
[ Strategy ] [ Self-Supervised ] - [ [arXiv](https://arxiv.org/abs/2407.01702) ] [ [Project](https://github.com/KTH-RPL/SeFlow) ] → [here](#seflow)

- **DeFlow: Decoder of Scene Flow Network in Autonomous Driving**
*Qingwen Zhang, Yi Yang, Heng Fang, Ruoyu Geng, Patric Jensfelt*
International Conference on Robotics and Automation (**ICRA**) 2024
[ Backbone ] [ Supervised ] - [ [arXiv](https://arxiv.org/abs/2401.16122) ] [ [Project](https://github.com/KTH-RPL/DeFlow) ] → [here](#deflow)

🎁 <b>One repository, All methods!</b>
Additionally, *OpenSceneFlow* integrates the following excellent works: [ICLR'24 ZeroFlow](https://arxiv.org/abs/2305.10424), [ICCV'23 FastNSF](https://arxiv.org/abs/2304.09121), [RA-L'21 FastFlow3D](https://arxiv.org/abs/2103.01306), [NeurIPS'21 NSFP](https://arxiv.org/abs/2111.01253). (More on the way...)

<details> <summary> Summary of them:</summary>

- [x] [FastFlow3D](https://arxiv.org/abs/2103.01306): RA-L 2021, a basic backbone model.
- [x] [ZeroFlow](https://arxiv.org/abs/2305.10424): ICLR 2024, its pre-trained weights can easily be converted into our format through [the script](tools/zerof2ours.py) (see the conversion sketch below).
- [x] [NSFP](https://arxiv.org/abs/2111.01253): NeurIPS 2021, 3x faster than the original version thanks to [our CUDA speedup](assets/cuda/README.md), with the same (slightly better) performance.
- [x] [FastNSF](https://arxiv.org/abs/2304.09121): ICCV 2023, SSL optimization-based.
- [ ] [ICP-Flow](https://arxiv.org/abs/2402.17351): CVPR 2024, SSL optimization-based. Coding is done; it will be made public after review.

</details>
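
As a rough illustration of what such a weight conversion does, the sketch below renames checkpoint `state_dict` keys by prefix. It is not the actual `tools/zerof2ours.py` — the real key mapping lives in that script, and the prefixes in the usage line are made up.

```python
# Hypothetical sketch of a checkpoint conversion like tools/zerof2ours.py.
# The real key mapping lives in that script; the prefixes below are invented.
import torch

def convert(src_path: str, dst_path: str, rename: dict) -> None:
    ckpt = torch.load(src_path, map_location="cpu")
    state = ckpt.get("state_dict", ckpt)  # Lightning ckpts nest weights here
    out = {}
    for key, tensor in state.items():
        new_key = key
        for old_prefix, new_prefix in rename.items():
            if key.startswith(old_prefix):
                new_key = new_prefix + key[len(old_prefix):]
                break
        out[new_key] = tensor
    torch.save({"state_dict": out}, dst_path)

# e.g. convert("zeroflow.ckpt", "ours.ckpt", {"model.backbone.": "model.encoder."})
```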

💡 Want to learn how to add your own network to this structure? Check the [Contributing section](CONTRIBUTING.md#adding-a-new-method) to learn more about the code. Feel free to open a pull request and add your bibtex [here](#cite-us).

---

<!-- 📜 Changelog:

- 🎁 2025/1/28 14:58: Update the codebase to collect all methods in one repository, referencing the [Pointcept](https://github.com/Pointcept/Pointcept) repo.
- 🤗 2024/11/18 16:17: Update model and demo data download links through HuggingFace; personally I found `wget` from the HuggingFace link much faster than Zenodo.
- 2024/09/26 16:24: All code is uploaded and tested. You can try training directly after downloading (through [HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow)/[Zenodo](https://zenodo.org/records/13744999)) the demo data, or a pretrained weight for evaluation.
- 2024/07/24: Merging SeFlow & DeFlow code together, lighter setup and easier running.
- 🔥 2024/07/02: Check the self-supervised version in our new ECCV'24 [SeFlow](https://github.com/KTH-RPL/SeFlow). Ranked 1st among self-supervised methods on the new leaderboard. -->

## 0. Installation

There are two ways to install the codebase: directly on your [local machine](#environment-setup) or in a [Docker container](#docker-recommended-for-isolation).

### Environment Setup

We use conda to manage the environment; you can install it following [these instructions](assets/README.md#system). Then create the base environment with the following command [5~15 minutes]:

```bash
git clone --recursive https://github.com/KTH-RPL/OpenSceneFlow.git
cd OpenSceneFlow && mamba env create -f environment.yaml

# You may need to export the environment lib path into LD_LIBRARY_PATH, e.g.:
# export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/kin/mambaforge/lib
```
We also provide [requirements.txt](requirements.txt); please check its usage through the [Dockerfile](Dockerfile).

### Docker (Recommended for Isolation)

You can always choose [Docker](https://en.wikipedia.org/wiki/Docker_(software)), which gives you an isolated environment and frees you from installation. Pull the pre-built Docker image or build it manually.

```bash
# option 1: pull from docker hub
docker pull zhangkin/opensf

# run container
docker run -it --net=host --gpus all -v /dev/shm:/dev/shm -v /home/kin/data:/home/kin/data --name opensf zhangkin/opensf /bin/zsh

# it is better to recompile the CUDA extensions for your own GPU:
cd /home/kin/workspace/OpenSceneFlow && git pull
cd /home/kin/workspace/OpenSceneFlow/assets/cuda/mmcv && /opt/conda/envs/opensf/bin/python ./setup.py install
cd /home/kin/workspace/OpenSceneFlow/assets/cuda/chamfer3D && /opt/conda/envs/opensf/bin/python ./setup.py install
cd /home/kin/workspace/OpenSceneFlow
mamba activate opensf
```

If you prefer to build the Docker image yourself, check the [build-docker-image](assets/README.md#build-docker-image) section for more details.

## 1. Data Preparation

Refer to [dataprocess/README.md](dataprocess/README.md) for dataset download instructions. Currently, we support **Argoverse 2**, **Waymo**, **nuScenes** and **custom datasets** (more datasets will be added in the future).

After downloading, convert the raw data to `.h5` format for easy training, evaluation, and visualization. Follow the steps in [dataprocess/README.md#process](dataprocess/README.md#process).

For a quick start, use our **mini processed dataset**, which includes one scene in `train` and `val`. It is pre-converted to `.h5` format with label data ([HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow/blob/main/demo_data.zip)/[Zenodo](https://zenodo.org/records/13744999/files/demo_data.zip)).

```bash
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/demo_data.zip
unzip demo_data.zip -d /home/kin/data/av2/h5py
```

Once extracted, you can directly use this dataset to run the [training script](#2-quick-start) without further processing.
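
If you are curious what the `.h5` files contain, a minimal inspection sketch with `h5py` follows. The file name is hypothetical and the group/dataset layout is not guaranteed — check the keys that are actually printed for your file.

```python
# Minimal sketch to peek inside a demo .h5 file (requires: pip install h5py).
# The file name is hypothetical; print the keys to see the real layout.
import h5py

path = "/home/kin/data/av2/h5py/demo/train/scene.h5"  # hypothetical file name
with h5py.File(path, "r") as f:
    def show(name, obj):
        # print every group/dataset with its shape when available
        print(name, getattr(obj, "shape", ""))
    f.visititems(show)
```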

## 2. Quick Start

Some tips before running the code:
* Don't forget to activate the Python environment before running the code.
* If you want to use [wandb](https://wandb.ai), replace all `entity="kth-rpl",` with your own entity; otherwise TensorBoard will be used locally.
* Set the correct data path by passing it in the config, e.g. `train_data=/home/kin/data/av2/h5py/demo/train val_data=/home/kin/data/av2/h5py/demo/val`.

To free yourself from training, you can download pretrained weights from [HuggingFace](https://huggingface.co/kin-zhang/OpenSceneFlow); the detailed `wget` command is provided in each model section. Optimization-based methods are training-free, so you can run them directly as in [3. Evaluation](#3-evaluation) (check more in the evaluation section).

```bash
mamba activate opensf
```

### Flow4D

Train Flow4D with the leaderboard submission config. [Runtime: around 18 hours on 4x RTX 3090 GPUs.]

```bash
python train.py model=flow4d optimizer.lr=1e-3 epochs=15 batch_size=8 num_frames=5 loss_fn=deflowLoss "voxel_size=[0.2, 0.2, 0.2]" "point_cloud_range=[-51.2, -51.2, -3.2, 51.2, 51.2, 3.2]"

# Pretrained weights can be downloaded with:
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/flow4d_best.ckpt
```

### SSF

Extra packages are needed for the SSF model:
```bash
pip install mmengine-lite torch-scatter
# if torch-scatter doesn't work, reinstall it with:
pip install https://data.pyg.org/whl/torch-2.0.0%2Bcu118/torch_scatter-2.1.2%2Bpt20cu118-cp310-cp310-linux_x86_64.whl
```

Train SSF with the leaderboard submission config. [Runtime: around 6 hours on 8x A100 GPUs.]

```bash
python train.py model=ssf optimizer.lr=8e-3 epochs=25 batch_size=64 loss_fn=deflowLoss "voxel_size=[0.2, 0.2, 6]" "point_cloud_range=[-51.2, -51.2, -3, 51.2, 51.2, 3]"
```

Pretrained weights can be downloaded with:
```bash
# the leaderboard weight
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/ssf_best.ckpt

# the long-range weight
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/ssf_long.ckpt
```
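
As a sanity check on these configs, the BEV grid resolution follows directly from `point_cloud_range` and `voxel_size`. The helper below is illustrative arithmetic only, not a function from this codebase:

```python
# Illustrative arithmetic only (not part of this codebase): how
# point_cloud_range and voxel_size determine the grid resolution.
def grid_size(point_cloud_range, voxel_size):
    x_min, y_min, z_min, x_max, y_max, z_max = point_cloud_range
    vx, vy, vz = voxel_size
    return (
        round((x_max - x_min) / vx),
        round((y_max - y_min) / vy),
        round((z_max - z_min) / vz),
    )

# SSF leaderboard config: a 512 x 512 BEV grid with a single z slice.
print(grid_size([-51.2, -51.2, -3, 51.2, 51.2, 3], [0.2, 0.2, 6]))  # (512, 512, 1)
```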

### Feed-Forward Self-Supervised Model Training

Training SeFlow/SeFlow++ requires two steps:
1) run the auto-labeling process;
2) specify the loss function; we set the config of our best model on the leaderboard (a sketch of how such weighted loss terms combine follows below).
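
The `+add_seloss={...}` override sets per-term weights for the self-supervised objective. As a hedged sketch — the real loss lives in the repo's `seflowLoss` implementation, and the term names here merely mirror the config keys — a weighted multi-term loss reduces to a weighted sum:

```python
# Sketch of combining weighted self-supervised loss terms, mirroring the
# add_seloss config keys. This is NOT the repo's seflowLoss implementation.
import torch

def weighted_loss(terms, weights):
    # each term is a scalar tensor; a missing weight disables that term
    return sum(weights.get(name, 0.0) * value for name, value in terms.items())

weights = {"chamfer_dis": 1.0, "static_flow_loss": 1.0,
           "dynamic_chamfer_dis": 1.0, "cluster_based_pc0pc1": 1.0}
terms = {name: torch.rand(()) for name in weights}  # dummy scalar losses
print(weighted_loss(terms, weights))
```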

#### SeFlow

```bash
# [Runtime: around 11 hours on 4x A100 GPUs.]
python train.py model=deflow optimizer.lr=2e-4 epochs=9 batch_size=16 loss_fn=seflowLoss +ssl_label=seflow_auto "+add_seloss={chamfer_dis: 1.0, static_flow_loss: 1.0, dynamic_chamfer_dis: 1.0, cluster_based_pc0pc1: 1.0}" "model.target.num_iters=2"

# Pretrained weights can be downloaded with:
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/seflow_best.ckpt
```

#### SeFlow++

```bash
# [Runtime: around ? hours on ? GPUs.]
python train.py model=deflowpp optimizer.lr=2e-4 epochs=9 batch_size=16 loss_fn=seflowppLoss +ssl_label=seflowpp_auto "+add_seloss={chamfer_dis: 1.0, static_flow_loss: 1.0, dynamic_chamfer_dis: 1.0, cluster_based_pc0pc1: 1.0}" "model.target.num_iters=2" num_frames=3

# Pretrained weights can be downloaded with:
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/seflowpp_best.ckpt
```

### DeFlow

Train DeFlow with the leaderboard submission config. [Runtime: around 6-8 hours on 4x A100 GPUs.] Please adjust `batch_size` and `lr` accordingly if you don't have enough GPU memory (e.g., `batch_size=6` for a 24 GB GPU).

```bash
python train.py model=deflow optimizer.lr=2e-4 epochs=15 batch_size=16 loss_fn=deflowLoss

# Pretrained weights can be downloaded with:
wget https://huggingface.co/kin-zhang/OpenSceneFlow/resolve/main/deflow_best.ckpt
```

## 3. Evaluation

You can view the wandb dashboard for training and evaluation results, or upload results to the online leaderboard.

Since we save all hyper-parameters and model checkpoints during training, the only thing you need to do is specify the checkpoint path. Remember to set the data path correctly as well.

```bash
# (feed-forward): load the ckpt and run it; it directly prints all metrics
python eval.py checkpoint=/home/kin/seflow_best.ckpt av2_mode=val

# (optimization-based): this may take a really long time; consider running it inside tmux
python eval.py model=nsfp

# with av2_mode=test, it outputs av2_submit.zip or av2_submit_v2.zip for leaderboard submission
python eval.py checkpoint=/home/kin/seflow_best.ckpt av2_mode=test leaderboard_version=1
python eval.py checkpoint=/home/kin/seflow_best.ckpt av2_mode=test leaderboard_version=2
```
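
Because the checkpoints bundle hyper-parameters with the weights, you can peek at what a downloaded `.ckpt` stores before running it. A minimal sketch, assuming a standard PyTorch Lightning checkpoint layout (top-level keys other than `state_dict` may differ):

```python
# Minimal sketch to inspect a downloaded checkpoint. Assumes a standard
# PyTorch Lightning layout; keys other than "state_dict" may differ.
import torch

ckpt = torch.load("/home/kin/seflow_best.ckpt", map_location="cpu")
print("top-level keys:", list(ckpt.keys()))
print("saved hyper-parameters:", ckpt.get("hyper_parameters", "not stored under this key"))
print("num weight tensors:", len(ckpt.get("state_dict", {})))
```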

### **📊 Range-Wise Metric (New!)**

In the [SSF paper](https://arxiv.org/abs/2501.17821), we introduce a new distance-based evaluation metric for scene flow estimation. Below is an example output for SSF with `point_cloud_range` extended to 204.8 m and `voxel_size` of 0.2 m. Check more long-range results in the [SSF paper](https://arxiv.org/abs/2501.17821).

| Distance (m) | Static | Dynamic | NumPointsStatic | NumPointsDynamic |
|--------------|----------|----------|-----------------|------------------|
| 0-35 | 0.00836 | 0.11546 | 3.33e+08 | 1.57e+07 |
| 35-50 | 0.00910 | 0.16805 | 4.40e+07 | 703125 |
| 50-75 | 0.01107 | 0.20448 | 3.25e+07 | 395398 |
| 75-100 | 0.01472 | 0.24133 | 1.31e+07 | 145281 |
| 100-inf | 0.01970 | 0.30536 | 1.32e+07 | 171865 |
| **Mean** | 0.01259 | 0.20693 | NaN | NaN |
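
The gist of the metric: bin points by their distance to the ego vehicle and average the end-point error per bin, separately for static and dynamic points. A rough NumPy sketch follows — the bin edges mirror the table above, but this is not the repo's exact evaluation code:

```python
# Rough sketch of a range-wise metric: mean end-point error per distance bin.
# Bin edges mirror the table above; not the repo's exact implementation.
import numpy as np

def range_wise_epe(points, flow_est, flow_gt, edges=(0, 35, 50, 75, 100, np.inf)):
    epe = np.linalg.norm(flow_est - flow_gt, axis=1)   # per-point error (m)
    dist = np.linalg.norm(points[:, :2], axis=1)       # planar range to ego (m)
    out = {}
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (dist >= lo) & (dist < hi)
        out[f"{lo}-{hi}"] = float(epe[mask].mean()) if mask.any() else float("nan")
    return out

# usage: range_wise_epe(pc0, est_flow, gt_flow) with (N, 3) arrays
```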

### Submit results to the public leaderboard

If you select `av2_mode=test`, evaluation produces a zip file you can submit to the public leaderboard.
Note: the leaderboard results in the DeFlow & SeFlow main papers use [version 1](https://eval.ai/web/challenges/challenge-page/2010/evaluation), as [version 2](https://eval.ai/web/challenges/challenge-page/2210/overview) was released after DeFlow & SeFlow.

```bash
# Step 1: the evalai client may conflict with the main environment, so create a new one:
mamba create -n py37 python=3.7
mamba activate py37
pip install "evalai"

# Step 2: log in to EvalAI and register your team
evalai set-token <your token>

# Step 3: copy the command printed above and submit to the leaderboard
evalai challenge 2010 phase 4018 submit --file av2_submit.zip --large --private
evalai challenge 2210 phase 4396 submit --file av2_submit_v2.zip --large --private
```

## 4. Visualization

We also provide a script to visualize model results. Specify the checkpoint path and the data path; the steps are quite similar to evaluation.

```bash
# (feed-forward): load ckpt
python save.py checkpoint=/home/kin/seflow_best.ckpt dataset_path=/home/kin/data/av2/preprocess_v2/sensor/vis
# (optimization-based): switch model by passing the model name
python save.py model=nsfp dataset_path=/home/kin/data/av2/h5py/demo/val

# The output of the above command will look like:
Model: DeFlow, Checkpoint from: /home/kin/model_zoo/v2/seflow_best.ckpt
We already write the flow_est into the dataset, please run following commend to visualize the flow. Copy and paste it to your terminal:
python tools/visualization.py --res_name 'seflow_best' --data_dir /home/kin/data/av2/preprocess_v2/sensor/vis
Enjoy! ^v^ ------

# Then run the printed command in your terminal:
python tools/visualization.py --res_name 'seflow_best' --data_dir /home/kin/data/av2/preprocess_v2/sensor/vis
```
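
Since `save.py` writes the estimated flow back into the `.h5` files under the chosen `res_name`, you can also consume the results directly, e.g. for custom analysis. A hedged sketch, assuming each per-frame group holds a dataset named after `res_name` (the file name is hypothetical — verify the actual keys first):

```python
# Hedged sketch: read saved flow results back from an .h5 file. Assumes each
# frame group holds a dataset named after res_name; verify the real keys first.
import h5py
import numpy as np

res_name = "seflow_best"
path = "/home/kin/data/av2/preprocess_v2/sensor/vis/scene.h5"  # hypothetical file name
with h5py.File(path, "r") as f:
    for frame_key in list(f.keys())[:3]:
        group = f[frame_key]
        if res_name in group:
            flow = np.asarray(group[res_name])
            print(frame_key, flow.shape, "mean |flow|:", np.linalg.norm(flow, axis=-1).mean())
```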

https://github.com/user-attachments/assets/f031d1a2-2d2f-4947-a01f-834ed1c146e6

For easy comparison with ground truth and other methods, we also provide a multi-window Open3D visualization:
```bash
python tools/visualization.py --mode mul --res_name "['flow', 'seflow_best']" --data_dir /home/kin/data/av2/preprocess_v2/sensor/vis
```

Alternatively, you can interact with the results through [rerun](https://github.com/rerun-io/rerun); please visualize scene by scene rather than all at once.

```bash
python tools/visualization_rerun.py --data_dir /home/kin/data/av2/h5py/demo/train --res_name "['flow', 'deflow']"
```

https://github.com/user-attachments/assets/07e8d430-a867-42b7-900a-11755949de21

## Cite Us

[*OpenSceneFlow*](https://github.com/KTH-RPL/OpenSceneFlow) was originally designed by [Qingwen Zhang](https://kin-zhang.github.io/) during the development of DeFlow and SeFlow.
If you find it useful, please cite our works:

```bibtex
@inproceedings{zhang2024seflow,
  author={Zhang, Qingwen and Yang, Yi and Li, Peizheng and Andersson, Olov and Jensfelt, Patric},
  title={{SeFlow}: A Self-Supervised Scene Flow Method in Autonomous Driving},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024},
  pages={353--369},
  organization={Springer},
  doi={10.1007/978-3-031-73232-4_20},
}
@inproceedings{zhang2024deflow,
  author={Zhang, Qingwen and Yang, Yi and Fang, Heng and Geng, Ruoyu and Jensfelt, Patric},
  booktitle={2024 IEEE International Conference on Robotics and Automation (ICRA)},
  title={{DeFlow}: Decoder of Scene Flow Network in Autonomous Driving},
  year={2024},
  pages={2105-2111},
  doi={10.1109/ICRA57147.2024.10610278}
}
@article{zhang2025himo,
  title={HiMo: High-Speed Objects Motion Compensation in Point Clouds},
  author={Zhang, Qingwen and Khoche, Ajinkya and Yang, Yi and Ling, Li and Mansouri, Sina Sharif and Andersson, Olov and Jensfelt, Patric},
  year={2025},
  journal={arXiv preprint arXiv:2503.00803},
}
```

Our excellent collaborators' works have also contributed to this codebase:

```bibtex
@article{kim2025flow4d,
  author={Kim, Jaeyeul and Woo, Jungwan and Shin, Ukcheol and Oh, Jean and Im, Sunghoon},
  journal={IEEE Robotics and Automation Letters},
  title={Flow4D: Leveraging 4D Voxel Network for LiDAR Scene Flow Estimation},
  year={2025},
  volume={10},
  number={4},
  pages={3462-3469},
  doi={10.1109/LRA.2025.3542327}
}
@article{khoche2025ssf,
  title={SSF: Sparse Long-Range Scene Flow for Autonomous Driving},
  author={Khoche, Ajinkya and Zhang, Qingwen and Sanchez, Laura Pereira and Asefaw, Aron and Mansouri, Sina Sharif and Jensfelt, Patric},
  journal={arXiv preprint arXiv:2501.17821},
  year={2025}
}
```

Thank you for your support! ❤️
Feel free to contribute your method and add your bibtex here via pull request!

❤️: [BucketedSceneFlowEval](https://github.com/kylevedder/BucketedSceneFlowEval); [Pointcept](https://github.com/Pointcept/Pointcept); [OpenPCSeg](https://github.com/BAI-Yeqi/OpenPCSeg); [ZeroFlow](https://github.com/kylevedder/zeroflow) ...