diff --git a/3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb b/3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb index 1c51c2f39..43e63bdd0 100644 --- a/3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb +++ b/3d_segmentation/swin_unetr_brats21_segmentation_3d.ipynb @@ -455,7 +455,7 @@ "source": [ "## Create Swin UNETR model\n", "\n", - "In this scetion, we create Swin UNETR model for the 3-class brain tumor semantic segmentation. We use a feature size of 48. We also use gradient checkpointing (use_checkpoint) for more memory-efficient training. However, use_checkpoint for faster training if enough GPU memory is available. " + "In this section, we create a Swin UNETR model for 3-class brain tumor semantic segmentation. We use a feature size of 48. We also use gradient checkpointing (use_checkpoint) for more memory-efficient training. However, use_checkpoint can be disabled for faster training if enough GPU memory is available. " ] }, { diff --git a/3d_segmentation/swin_unetr_btcv_segmentation_3d.ipynb b/3d_segmentation/swin_unetr_btcv_segmentation_3d.ipynb index 4bd639db5..e2fb78580 100644 --- a/3d_segmentation/swin_unetr_btcv_segmentation_3d.ipynb +++ b/3d_segmentation/swin_unetr_btcv_segmentation_3d.ipynb @@ -96,7 +96,7 @@ "source": [ "# Pre-trained Swin UNETR Encoder\n", "\n", - "We use weights from self-supervised pre-training of Swin UNETR encoder (3D Swin Tranformer) on a cohort of 5050 CT scans from publicly available datasets. The encoder is pre-trained using reconstructin, rotation prediction and contrastive learning pre-text tasks as shown below. For more details, please refer to [1] (CVPR paper) and see this [repository](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/Pretrain). \n", + "We use weights from self-supervised pre-training of the Swin UNETR encoder (3D Swin Transformer) on a cohort of 5050 CT scans from publicly available datasets. 
The encoder is pre-trained using reconstruction, rotation prediction, and contrastive learning pretext tasks as shown below. For more details, please refer to [1] (CVPR paper) and see this [repository](https://github.com/Project-MONAI/research-contributions/tree/main/SwinUNETR/Pretrain). \n", "\n", "Please download the pre-trained weights from this [link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt) and place it in the root directory of this tutorial. \n", "\n", @@ -452,7 +452,7 @@ "source": [ "### Initialize Swin UNETR encoder from self-supervised pre-trained weights\n", "\n", - "In this section, we intialize the Swin UNETR encoder from pre-trained weights. The weights can be downloaded using the wget command below, or by following [this link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt) to GitHub. If training from scratch is desired, please skip this section." + "In this section, we initialize the Swin UNETR encoder from pre-trained weights. The weights can be downloaded using the wget command below, or by following [this link](https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/model_swinvit.pt) to GitHub. If training from scratch is desired, please skip this section." ] }, { diff --git a/auto3dseg/docs/gpu_opt.md b/auto3dseg/docs/gpu_opt.md index 0eb2171cf..ed201b4cf 100644 --- a/auto3dseg/docs/gpu_opt.md +++ b/auto3dseg/docs/gpu_opt.md @@ -8,7 +8,7 @@ Sometimes the low GPU utilization is because that GPU capacities is not fully ut Our proposed solution is capable to automatically estimate hyper-parameters in model training configurations maximizing utilities of the available GPU capacities. The solution is leveraging hyper-parameter optimization algorithms to search for optimital hyper-parameters with any given GPU devices. -The following hyper-paramters in model training configurations are optimized in the process. 
+The following hyper-parameters in model training configurations are optimized in the process. 1. **num_images_per_batch:** Batch size determines how many images are in each mini-batch and how many training iterations per epoch. Large batch size can reduce training time per epoch and increase GPU memory usage with decent CPU capacities for I/O; 2. **num_sw_batch_size:** Batch size in sliding-window inference directly relates to how many patches are in one pass of model feedforward operation. Large batch size in sliding-window inference can reduce overall inference time and increase GPU memory usage; diff --git a/deployment/Triton/models/mednist_class/1/model.py b/deployment/Triton/models/mednist_class/1/model.py index 7e420f9de..cdcbeaff8 100644 --- a/deployment/Triton/models/mednist_class/1/model.py +++ b/deployment/Triton/models/mednist_class/1/model.py @@ -89,7 +89,7 @@ def initialize(self, args): """ `initialize` is called only once when the model is being loaded. Implementing `initialize` function is optional. This function allows - the model to intialize any state associated with this model. + the model to initialize any state associated with this model. """ # Pull model from google drive diff --git a/deployment/Triton/models/monai_covid/1/model.py b/deployment/Triton/models/monai_covid/1/model.py index 3e8f6442c..f87f793e9 100644 --- a/deployment/Triton/models/monai_covid/1/model.py +++ b/deployment/Triton/models/monai_covid/1/model.py @@ -79,7 +79,7 @@ def initialize(self, args): """ `initialize` is called only once when the model is being loaded. Implementing `initialize` function is optional. This function allows - the model to intialize any state associated with this model. + the model to initialize any state associated with this model. 
""" # Pull model from google drive diff --git a/generation/anomaly_detection/anomaly_detection_with_transformers.ipynb b/generation/anomaly_detection/anomaly_detection_with_transformers.ipynb index 01ebc6f00..419241a92 100644 --- a/generation/anomaly_detection/anomaly_detection_with_transformers.ipynb +++ b/generation/anomaly_detection/anomaly_detection_with_transformers.ipynb @@ -1313,7 +1313,7 @@ "# loop over tokens\n", "for i in range(1, latent.shape[1]):\n", " if mask_flattened[i - 1]:\n", - " # if token is low probability, replace with tranformer's most likely token\n", + " # if token is low probability, replace with Transformer's most likely token\n", " logits = transformer_model(latent_healed[:, :i])\n", " probs = F.softmax(logits, dim=-1)\n", " # don't sample beginning of sequence token\n", diff --git a/generation/maisi/scripts/sample.py b/generation/maisi/scripts/sample.py index fb1d0f425..b0abeea9a 100644 --- a/generation/maisi/scripts/sample.py +++ b/generation/maisi/scripts/sample.py @@ -597,7 +597,7 @@ def __init__( label_dict = json.load(f) self.all_anatomy_size_condtions_json = all_anatomy_size_condtions_json - # intialize variables + # initialize variables self.body_region = body_region self.anatomy_list = [label_dict[organ] for organ in anatomy_list] self.all_mask_files_json = all_mask_files_json diff --git a/monailabel/monailabel_HelloWorld_radiology_3dslicer.ipynb b/monailabel/monailabel_HelloWorld_radiology_3dslicer.ipynb index 3660c7e8e..63a3a9cc2 100644 --- a/monailabel/monailabel_HelloWorld_radiology_3dslicer.ipynb +++ b/monailabel/monailabel_HelloWorld_radiology_3dslicer.ipynb @@ -19,7 +19,7 @@ "\n", "***The Active Learning Process with MONAI Label***\n", "\n", - "In this notebook, we provide a hello world example of MONAI Label use case. Using Spleen segmentation in Radiology app as the demonstration, 3D Slicer as the client viewer, we show how MONAI Label workflow serves as interacitve AI-Assisted tool for labeling CT scans. 
\n", + "In this notebook, we provide a hello world example of MONAI Label use case. Using Spleen segmentation in Radiology app as the demonstration, 3D Slicer as the client viewer, we show how MONAI Label workflow serves as interactive AI-Assisted tool for labeling CT scans. \n", "\n", "![workflow](./figures/monailabel_radiology_3dslicer/teaser_img.png)\n", "\n", diff --git a/multimodal/openi_multilabel_classification_transchex/transchex_openi_multilabel_classification.ipynb b/multimodal/openi_multilabel_classification_transchex/transchex_openi_multilabel_classification.ipynb index e2771a04c..311ecf53d 100644 --- a/multimodal/openi_multilabel_classification_transchex/transchex_openi_multilabel_classification.ipynb +++ b/multimodal/openi_multilabel_classification_transchex/transchex_openi_multilabel_classification.ipynb @@ -28,7 +28,7 @@ "An example of images and corresponding reports in Open-I dataset is presented as follows [2]:\n", "![image](../../figures/openi_sample.png)\n", "\n", - "In this tutorial, we use the TransCheX model with 2 layers for each of vision, language mixed modality encoders respectively. As an input to the TransCheX, we use the patient **report** and corresponding **chest X-ray image**. The image itself will be divided into non-overlapping patches with a specified patch resolution and projected into an embedding space. Similarly the reports are tokenized and projected into their respective embedding space. The language and vision encoders seperately encode their respective features from the projected embeddings in each modality. Furthmore, the output of vision and language encoders are fed into a mixed modality encoder which extraxts mutual information. The output of the mixed modality encoder is then utilized for the classification application. \n", + "In this tutorial, we use the TransCheX model with 2 layers for each of vision, language mixed modality encoders respectively. 
As an input to the TransCheX, we use the patient **report** and corresponding **chest X-ray image**. The image itself will be divided into non-overlapping patches with a specified patch resolution and projected into an embedding space. Similarly the reports are tokenized and projected into their respective embedding space. The language and vision encoders seperately encode their respective features from the projected embeddings in each modality. Furthmore, the output of vision and language encoders are fed into a mixed modality encoder which extracts mutual information. The output of the mixed modality encoder is then utilized for the classification application. \n", "\n", "[1] : \"Hatamizadeh et al.,TransCheX: Self-Supervised Pretraining of Vision-Language Transformers for Chest X-ray Analysis\"\n", "\n", diff --git a/self_supervised_pretraining/vit_unetr_ssl/ssl_train.ipynb b/self_supervised_pretraining/vit_unetr_ssl/ssl_train.ipynb index df492ee5c..92198ef9d 100644 --- a/self_supervised_pretraining/vit_unetr_ssl/ssl_train.ipynb +++ b/self_supervised_pretraining/vit_unetr_ssl/ssl_train.ipynb @@ -260,7 +260,7 @@ "\n", "model = model.to(device)\n", "\n", - "# Define Hyper-paramters for training loop\n", + "# Define Hyper-parameters for training loop\n", "max_epochs = 500\n", "val_interval = 2\n", "batch_size = 4\n",