Commit 56d09f7 (1 parent: cacef8b)
doc: pool-based sampling example updated

2 files changed

Lines changed: 21 additions & 18 deletions


docs/source/content/examples/pool-based_sampling.ipynb

Lines changed: 12 additions & 12 deletions
@@ -22,7 +22,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "### Enforce a reproducible result across runs"
+  "To enforce a reproducible result across runs, we set a random seed."
   ]
  },
  {
@@ -42,9 +42,9 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "### Load our `iris` dataset\n",
+  "## The dataset\n",
   "\n",
-  "For more information on the iris dataset, see:\n",
+  "Now we load the dataset. In this example, we are going to use the famous Iris dataset. For more information on the iris dataset, see:\n",
   " - [The dataset documentation on Wikipedia](https://en.wikipedia.org/wiki/Iris_flower_data_set)\n",
   " - [The scikit-learn interface](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html)"
   ]
@@ -66,7 +66,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "### Apply PCA onto our features and extract the first 2 principle components"
+  "For visualization purposes, we apply PCA to the original dataset."
   ]
  },
  {
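The PCA step described in this hunk can be sketched outside the notebook roughly as follows (an illustrative, self-contained sketch; variable names are not taken from the diff):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

# Load the iris dataset and project it onto its first two principal
# components, purely for 2-D visualization purposes.
iris = load_iris()
pca = PCA(n_components=2)
transformed_iris = pca.fit_transform(iris['data'])

# Each of the 150 examples is now described by two coordinates.
print(transformed_iris.shape)  # (150, 2)
```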
@@ -86,7 +86,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "### Visualize the principle components"
+  "This is what the dataset looks like."
   ]
  },
  {
@@ -124,9 +124,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "### Partition our `iris` dataset\n",
-  "\n",
-  "We first specify our training set $\\mathcal{L}$ consisting of 3 random examples. The remaining examples go to our \"unlabeled\" pool $\\mathcal{U}$."
+  "Now we partition our `iris` dataset into a training set $\\mathcal{L}$ and an \"unlabeled\" pool $\\mathcal{U}$. We first specify our training set $\\mathcal{L}$ consisting of 3 random examples. The remaining examples go to our pool $\\mathcal{U}$."
   ]
  },
  {
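The partitioning described in the cell above can be sketched like this (illustrative names; drawing 3 distinct indices with `np.random.choice(..., replace=False)` is one way to build the initial training set):

```python
import numpy as np
from sklearn.datasets import load_iris

np.random.seed(42)  # reproducible across runs

iris = load_iris()
X_raw, y_raw = iris['data'], iris['target']

# Training set L: 3 distinct random examples.
train_idx = np.random.choice(X_raw.shape[0], size=3, replace=False)
X_train, y_train = X_raw[train_idx], y_raw[train_idx]

# Everything else forms the "unlabeled" pool U.
X_pool = np.delete(X_raw, train_idx, axis=0)
y_pool = np.delete(y_raw, train_idx, axis=0)

print(X_train.shape, X_pool.shape)  # (3, 4) (147, 4)
```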
@@ -151,7 +149,9 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "## Define our models"
+  "## Active learning with pool-based sampling\n",
+  "\n",
+  "For the classification, we are going to use a simple k-nearest neighbors classifier. In this step, we are also going to initialize the ```ActiveLearner```."
   ]
  },
  {
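In modAL, the ```ActiveLearner``` mentioned here wraps a scikit-learn estimator. As a minimal stand-in, the same step can be sketched with plain scikit-learn (an illustrative sketch, not the notebook's actual code; `X_raw`, `X_train` and friends are assumed names):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

np.random.seed(42)
iris = load_iris()
X_raw, y_raw = iris['data'], iris['target']

# Tiny initial training set L of 3 distinct random examples.
train_idx = np.random.choice(X_raw.shape[0], size=3, replace=False)
X_train, y_train = X_raw[train_idx], y_raw[train_idx]

# A simple k-nearest neighbors classifier plays the role of the
# estimator wrapped by the ActiveLearner.
knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

# Accuracy on the full dataset, using only 3 labeled examples.
acc = knn.score(X_raw, y_raw)
print(acc)
```

With only three labels the score is expected to be poor, which is exactly what motivates the query loop that follows.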
@@ -172,7 +172,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "## Predict class labels based on our limited dataset $\\mathcal{L}$"
+  "Let's see how our classifier performs on the initial training set!"
   ]
  },
  {
@@ -242,7 +242,7 @@
   "\n",
   "As we can see, our model is unable to properly learn the underlying data distribution. All of its predictions are for the third class label, and as such it is only as competitive as defaulting its predictions to a single class – if only we had more data!\n",
   "\n",
-  "Below, we tune our classifier by allowing it to query 20 instances it hasn't seen before. Using uncertainty sampling, our classifier aims to reduce the amount of uncertainty in its predictions using a variety of measures — see the documentation for more on specific [classification uncertainty measures](https://cosmic-cortex.github.io/modAL/Uncertainty-sampling#uncertainty). With each requested query, we remove that record from our pool $\\mathcal{U}$ and record our model's accuracy on the raw dataset."
+  "Below, we tune our classifier by allowing it to query 20 instances it hasn't seen before. Using uncertainty sampling, our classifier aims to reduce the amount of uncertainty in its predictions using a variety of measures — see the documentation for more on specific [classification uncertainty measures](https://modal-python.readthedocs.io/en/latest/content/query_strategies/Uncertainty-sampling.html). With each requested query, we remove that record from our pool $\\mathcal{U}$ and record our model's accuracy on the raw dataset."
   ]
  },
  {
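The query loop described in this hunk can be sketched without modAL as plain uncertainty sampling: at each step, label the pool instance with the least confident prediction (a minimal sketch assuming the partition above; modAL's `query`/`teach` interface is replaced by explicit numpy bookkeeping):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

np.random.seed(42)
iris = load_iris()
X_raw, y_raw = iris['data'], iris['target']

train_idx = np.random.choice(X_raw.shape[0], size=3, replace=False)
X_train, y_train = X_raw[train_idx], y_raw[train_idx]
X_pool = np.delete(X_raw, train_idx, axis=0)
y_pool = np.delete(y_raw, train_idx, axis=0)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

performance_history = []
for _ in range(20):
    # Classification uncertainty: 1 - highest predicted class probability.
    proba = knn.predict_proba(X_pool)
    query_idx = int(np.argmax(1 - proba.max(axis=1)))

    # "Label" the queried instance and remove it from the pool U.
    X_train = np.vstack([X_train, X_pool[query_idx]])
    y_train = np.append(y_train, y_pool[query_idx])
    X_pool = np.delete(X_pool, query_idx, axis=0)
    y_pool = np.delete(y_pool, query_idx)

    # Retrain and record accuracy on the raw dataset.
    knn.fit(X_train, y_train)
    performance_history.append(knn.score(X_raw, y_raw))
```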
@@ -388,7 +388,7 @@
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
-  "version": "3.6.6"
+  "version": "3.6.5"
   }
  },
  "nbformat": 4,

docs/source/content/examples/ranked_batch_mode.ipynb

Lines changed: 9 additions & 6 deletions
@@ -51,7 +51,9 @@
   "source": [
   "## The dataset\n",
   "\n",
-  "Now we load the dataset. In this example, we are going to use the famous Iris dataset."
+  "Now we load the dataset. In this example, we are going to use the famous Iris dataset. For more information on the iris dataset, see:\n",
+  " - [The dataset documentation on Wikipedia](https://en.wikipedia.org/wiki/Iris_flower_data_set)\n",
+  " - [The scikit-learn interface](http://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_iris.html)"
   ]
  },
  {
@@ -129,7 +131,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "Now we partition our `iris` dataset into a training set and a pool of unlabeled examples. We first specify our training set consisting of 3 random examples. The remaining examples go to our \"unlabeled\" pool."
+  "Now we partition our `iris` dataset into a training set $\\mathcal{L}$ and an \"unlabeled\" pool $\\mathcal{U}$. We first specify our training set $\\mathcal{L}$ consisting of 3 random examples. The remaining examples go to our pool $\\mathcal{U}$."
   ]
  },
  {
@@ -154,7 +156,7 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "# Active learning with ranked batch mode sampling\n",
+  "## Active learning with ranked batch mode sampling\n",
   "\n",
   "For the classification, we are going to use a simple k-nearest neighbors classifier."
   ]
@@ -273,9 +275,9 @@
   "cell_type": "markdown",
   "metadata": {},
   "source": [
-  "Now we update our model by batch-mode sampling our \"unlabeled\" dataset. We tune our classifier by allowing it to query at most 20 instances it hasn't seen before. To properly utilize batch-mode sampling, we allow our model to request three records per query (instead of 1) but subsequently only allow our model to make 6 queries. Under the hood, our classifier aims to balance the ideas behind uncertainty and dissimilarity in its choices.\n",
+  "Now we update our model by batch-mode sampling our \"unlabeled\" dataset $\\mathcal{U}$. We tune our classifier by allowing it to query at most 20 instances it hasn't seen before. To properly utilize batch-mode sampling, we allow our model to request three records per query (instead of 1) but subsequently only allow our model to make 6 queries. Under the hood, our classifier aims to balance the ideas behind uncertainty and dissimilarity in its choices.\n",
   "\n",
-  "With each requested query, we remove that record from our pool and record our model's accuracy on the raw dataset."
+  "With each requested query, we remove that record from our pool $\\mathcal{U}$ and record our model's accuracy on the raw dataset."
   ]
  },
  {
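The uncertainty-plus-dissimilarity idea behind ranked batch-mode sampling can be sketched greedily: each record in a batch is picked to maximize a score combining prediction uncertainty and distance to the nearest already-labeled (or already-chosen) point. This is an illustrative simplification with equal weighting (modAL's actual strategy weights the two terms by the remaining pool fraction), not the library's implementation:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.metrics.pairwise import euclidean_distances
from sklearn.neighbors import KNeighborsClassifier

np.random.seed(42)
iris = load_iris()
X_raw, y_raw = iris['data'], iris['target']

train_idx = np.random.choice(X_raw.shape[0], size=3, replace=False)
X_train, y_train = X_raw[train_idx], y_raw[train_idx]
X_pool = np.delete(X_raw, train_idx, axis=0)
y_pool = np.delete(y_raw, train_idx, axis=0)

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X_train, y_train)

def ranked_batch(model, X_pool, X_labeled, n_instances=3):
    """Greedily pick a batch balancing uncertainty and dissimilarity."""
    uncertainty = 1 - model.predict_proba(X_pool).max(axis=1)
    labeled = X_labeled.copy()
    chosen = []
    for _ in range(n_instances):
        # Dissimilarity: distance to the nearest labeled/chosen point,
        # mapped to [0, 1) via d / (1 + d), i.e. 1 - similarity.
        dist = euclidean_distances(X_pool, labeled).min(axis=1)
        score = uncertainty + dist / (1.0 + dist)
        if chosen:
            score[chosen] = -np.inf  # never pick the same record twice
        idx = int(np.argmax(score))
        chosen.append(idx)
        labeled = np.vstack([labeled, X_pool[idx]])
    return chosen

# 6 queries of 3 records each, as described above.
for _ in range(6):
    batch = ranked_batch(knn, X_pool, X_train, n_instances=3)
    X_train = np.vstack([X_train, X_pool[batch]])
    y_train = np.append(y_train, y_pool[batch])
    X_pool = np.delete(X_pool, batch, axis=0)
    y_pool = np.delete(y_pool, batch)
    knn.fit(X_train, y_train)
```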
@@ -328,7 +330,8 @@
   "source": [
   "## Evaluate our model's performance\n",
   "\n",
-  "Here, we first plot the query iteration index against model accuracy. As you can see, our model is able to obtain an accuracy of ~0.90 within its first query, and isn't as susceptible to getting \"stuck\" with querying uninformative records from our unlabeled set."
+  "Here, we first plot the query iteration index against model accuracy. As you can see, our model is able to obtain an accuracy of ~0.90 within its first query, and isn't as susceptible to getting \"stuck\" with querying uninformative records from our unlabeled set.\n",
+  "To visualize the performance of our classifier, we also plot the correct and incorrect predictions on the full dataset."
   ]
  },
  {
