Commit 18b4430

update readme files
1 parent d60baa1 commit 18b4430

5 files changed

Lines changed: 17 additions & 18 deletions

examples/JSRT/README.md

Lines changed: 3 additions & 4 deletions
@@ -22,8 +22,7 @@ If you don't want to train the model by yourself, you can download a pre-trained
 1. Edit `config/train_test.cfg` by setting the value of `root_dir` as your `JSRT_root`. Then add the path of `PyMIC` to the `PYTHONPATH` environment variable (if you haven't done this) and start training by running:
 
 ```bash
-export PYTHONPATH=$PYTHONPATH:your_path_of_PyMIC
-python ../../pymic/train_infer/train_infer.py train config/train_test.cfg
+pymic_net_run train config/train_test.cfg
 ```
 
 2. During training or after training, run `tensorboard --logdir model/unet` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average Dice score and loss during training, as shown in the following images, where the blue and red curves are for the training set and validation set, respectively. We can observe some over-fitting on the training set.
@@ -36,13 +35,13 @@ python ../../pymic/train_infer/train_infer.py train config/train_test.cfg
 
 ```bash
 mkdir result
-python ../../pymic/train_infer/train_infer.py test config/train_test.cfg
+pymic_net_run test config/train_test.cfg
 ```
 
 2. Then edit `config/evaluation.cfg` by setting `ground_truth_folder_list` as your `JSRT_root/label`, and run the following command to obtain quantitative evaluation results in terms of Dice:
 
 ```bash
-python ../../pymic/util/evaluation.py config/evaluation.cfg
+pymic_evaluate config/evaluation.cfg
 ```
 
 The Dice score obtained with the default setting should be close to 94.88+/-2.72%. You can set `metric = assd` in `config/evaluation.cfg` and run the evaluation command again to obtain average symmetric surface distance (ASSD) results; with the default setting, the ASSD is close to 2.08+/-1.06 pixels.
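The Dice score reported above measures the overlap between a predicted mask A and a ground-truth mask B as 2|A∩B|/(|A|+|B|). The following is a minimal sketch of that formula on flat binary masks, for illustration only; it is not PyMIC's implementation, which the `pymic_evaluate` command provides:

```python
def dice_score(pred, label):
    """Dice = 2*|A intersect B| / (|A| + |B|) for two flat binary 0/1 masks."""
    inter = sum(p * g for p, g in zip(pred, label))
    total = sum(pred) + sum(label)
    # convention: two empty masks agree perfectly
    return 2.0 * inter / total if total > 0 else 1.0

# toy 1x6 masks: 2 overlapping foreground pixels, 3 foreground pixels in each
print(dice_score([0, 1, 1, 1, 0, 0], [0, 1, 1, 0, 0, 1]))  # 2*2/(3+3) = 0.666...
```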

examples/JSRT2/README.md

Lines changed: 3 additions & 4 deletions
@@ -16,8 +16,7 @@ To use the customized CNN, we also write a customized main function in `jsrt_tra
 1. Edit `config/train_test.cfg` by setting the value of `root_dir` as your `JSRT_root`, and start training by running:
 
 ```bash
-export PYTHONPATH=$PYTHONPATH:your_path_of_PyMIC
-python jsrt_train_infer.py train config/train_test.cfg
+python jsrt_net_run.py train config/train_test.cfg
 ```
 
 2. During training or after training, run `tensorboard --logdir model/my_net2d` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average Dice score and loss during training, as shown in the following images, where the blue and red curves are for the training set and validation set, respectively.
@@ -29,13 +28,13 @@ python jsrt_train_infer.py train config/train_test.cfg
 1. Edit the `testing` section in `config/train_test.cfg`, and run the following command for testing:
 
 ```bash
-python jsrt_train_infer.py test config/train_test.cfg
+python jsrt_net_run.py test config/train_test.cfg
 ```
 
 2. Edit `config/evaluation.cfg` by setting `ground_truth_folder_list` as your `JSRT_root/label`, and run the following command to obtain quantitative evaluation results in terms of Dice:
 
 ```
-python ../../pymic/util/evaluation.py config/evaluation.cfg
+pymic_evaluate config/evaluation.cfg
 ```
 
 The Dice score obtained with the default setting should be close to 94.61+/-2.84%.
Lines changed: 5 additions & 2 deletions
@@ -2,11 +2,11 @@
 from __future__ import print_function, division
 
 import sys
-from pymic.train_infer.train_infer import TrainInferAgent
+from pymic.net_run.net_run import TrainInferAgent
 from pymic.util.parse_config import parse_config
 from my_net2d import MyUNet2D
 
-if __name__ == "__main__":
+def main():
     if(len(sys.argv) < 3):
         print('Number of arguments should be 3. e.g.')
         print('    python train_infer.py train config.cfg')
@@ -27,3 +27,6 @@
     agent = TrainInferAgent(config, stage)
     agent.set_network(net)
     agent.run()
+
+if __name__ == "__main__":
+    main()
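The change above moves the script body out of the `if __name__ == "__main__":` guard into a `main()` function. An importable callable like this is what allows a command such as `pymic_net_run` or `pymic_evaluate` to be installed as a console-script entry point. A minimal sketch of the pattern (the names and messages here are illustrative, not PyMIC's actual code):

```python
import sys

def main(argv=None):
    # The logic lives in an importable function, so both
    # `python script.py ...` and an installed console command can call it.
    args = sys.argv[1:] if argv is None else argv
    if len(args) < 2:
        print('usage: script.py train|test config.cfg')
        return 1
    stage, cfg_file = args[0], args[1]
    print('stage={0} config={1}'.format(stage, cfg_file))
    return 0

if __name__ == "__main__":
    sys.exit(main())
```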

examples/fetal_hc/README.md

Lines changed: 3 additions & 4 deletions
@@ -19,8 +19,7 @@ If you don't want to train the model by yourself, you can download a pre-trained
 1. Edit `config/train_test.cfg` by setting the value of `root_dir` as your `HC_root`. Then add the path of `PyMIC` to the `PYTHONPATH` environment variable (if you haven't done this) and start training by running:
 
 ```bash
-export PYTHONPATH=$PYTHONPATH:your_path_of_PyMIC
-python ../../pymic/train_infer/train_infer.py train config/train_test.cfg
+pymic_net_run train config/train_test.cfg
 ```
 
 2. During training or after training, run `tensorboard --logdir model/unet` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average Dice score and loss during training, as shown in the following images, where the red and blue curves are for the training set and validation set, respectively.
@@ -33,13 +32,13 @@ python ../../pymic/train_infer/train_infer.py train config/train_test.cfg
 
 ```bash
 mkdir result
-python ../../pymic/train_infer/train_infer.py test config/train_test.cfg
+pymic_net_run test config/train_test.cfg
 ```
 
 2. Then edit `config/evaluation.cfg` by setting `ground_truth_folder_list` as your `HC_root/training_set_label`, and run the following command to obtain quantitative evaluation results in terms of Dice:
 
 ```bash
-python ../../pymic/util/evaluation.py config/evaluation.cfg
+pymic_evaluate config/evaluation.cfg
 ```
 
 The Dice score obtained with the default setting should be close to 97.41+/-1.95%. You can set `metric = assd` in `config/evaluation.cfg` and run the evaluation command again to obtain average symmetric surface distance (ASSD) results; with the default setting, the ASSD is close to 5.46+/-6.36 pixels. We find that the ASSD values are high for these segmentation results. You can try to improve the performance with different networks or training strategies by changing the configuration file `config/train_test.cfg`.
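The ASSD metric discussed above averages, over the boundary points of each mask, the distance to the nearest boundary point of the other mask. A simplified 2-D sketch on explicit point sets follows; exact definitions vary slightly between tools, and this is not PyMIC's implementation:

```python
def assd(surface_a, surface_b):
    """Average symmetric surface distance between two boundary point sets,
    given as lists of (x, y) coordinates (one common variant of the metric)."""
    def avg_min_dist(src, dst):
        # mean distance from each point in src to its nearest point in dst
        return sum(
            min(((x - u) ** 2 + (y - v) ** 2) ** 0.5 for u, v in dst)
            for x, y in src
        ) / len(src)
    return 0.5 * (avg_min_dist(surface_a, surface_b)
                  + avg_min_dist(surface_b, surface_a))

# two parallel 2-pixel boundaries, one pixel apart
print(assd([(0, 0), (1, 0)], [(0, 1), (1, 1)]))  # 1.0
```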

examples/prostate/README.md

Lines changed: 3 additions & 4 deletions
@@ -17,8 +17,7 @@ If you don't want to train the model by yourself, you can download a pre-trained
 1. Edit `config/train_test.cfg` by setting the value of `root_dir` as your `data/promise12/preprocess`. Then add the path of `PyMIC` to the `PYTHONPATH` environment variable (if you haven't done this) and start training by running:
 
 ```bash
-export PYTHONPATH=$PYTHONPATH:your_path_of_PyMIC
-python ../../pymic/train_infer/train_infer.py train config/train_test.cfg
+pymic_net_run train config/train_test.cfg
 ```
 
 2. During training or after training, run `tensorboard --logdir model` and you will see a link in the output, such as `http://your-computer:6006`. Open the link in a browser to observe the average Dice score and loss during training, as shown in the following images, where the blue and red curves are for the training set and validation set, respectively.
@@ -31,13 +30,13 @@ python ../../pymic/train_infer/train_infer.py train config/train_test.cfg
 
 ```bash
 mkdir result
-python ../../pymic/train_infer/train_infer.py test config/train_test.cfg
+pymic_net_run test config/train_test.cfg
 ```
 
 2. Then edit `config/evaluation.cfg` by setting `ground_truth_folder_list` as your `data/promise12/preprocess/label`, and run the following command to obtain quantitative evaluation results in terms of Dice:
 
 ```bash
-python ../../pymic/util/evaluation.py config/evaluation.cfg
+pymic_evaluate config/evaluation.cfg
 ```
 
 The Dice score obtained with the default setting should be close to 87.28+/-2.58%. You can set `metric = assd` in `config/evaluation.cfg` and run the evaluation command again to obtain average symmetric surface distance (ASSD) results; with the default setting, the ASSD is close to 1.83+/-0.75 pixels. You can try to improve the performance with different networks or training strategies by changing the configuration file `config/train_test.cfg`.
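The `.cfg` files used throughout these examples are INI-style configurations, so a switch such as `metric = assd` can also be made programmatically. A sketch using Python's standard `configparser` (the `[evaluation]` section name is an assumption for illustration; only the keys shown appear in the README text):

```python
from configparser import ConfigParser

cfg = ConfigParser()
# stand-in for reading config/evaluation.cfg from disk with cfg.read(path)
cfg.read_string("""
[evaluation]
metric = dice
ground_truth_folder_list = data/promise12/preprocess/label
""")
cfg.set("evaluation", "metric", "assd")  # switch Dice -> ASSD as above
print(cfg.get("evaluation", "metric"))   # assd
```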
