Commit 0c09592

Merge pull request #308 from ocefpaf/pin_jupyterbook

Pin jupyterbook for now

2 parents 173145c + ca30d81

11 files changed

Lines changed: 26 additions & 30 deletions

.github/workflows/deploy-docs.yml

Lines changed: 1 addition & 1 deletion

@@ -32,7 +32,7 @@ jobs:
       run: >
         set -e
-        && pip install jupyter-book
+        && pip install "jupyter-book<2"
         && jupyter-book build jupyterbook

     - name: GitHub Pages action
       if: success() && github.event_name == 'release'
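Quoting matters for this pin: inside the workflow's `run: >` script, an unquoted `jupyter-book<2` would be read by the shell as input redirection from a file named `2`. A minimal sketch of the pinned step (the step name and surrounding layout are illustrative, not copied from the workflow; only the quoted pin comes from this commit):

```yaml
# Hypothetical build step; only "pip install \"jupyter-book<2\"" is from the diff.
- name: Build documentation
  run: >
    set -e
    && pip install "jupyter-book<2"
    && jupyter-book build jupyterbook
```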

.github/workflows/docs-linkchecker.yml

Lines changed: 1 addition & 1 deletion

@@ -26,5 +26,5 @@ jobs:
     - name: Linkcheck
       run: >
         set -e
-        && pip install jupyter-book
+        && pip install "jupyter-book<2"
         && jupyter-book build jupyterbook --builder linkcheck

.github/workflows/test-env.yml

Lines changed: 1 addition & 1 deletion

@@ -12,7 +12,7 @@ jobs:
     strategy:
       matrix:
         # macos-latest is osx-arm64 and the env is not building there yet b/c of robis.
-        os: [ macos-13, ubuntu-latest, windows-latest ]
+        os: [ macos-15-intel, ubuntu-latest, windows-latest ]
       fail-fast: false
     defaults:
       run:

.pre-commit-config.yaml

Lines changed: 3 additions & 3 deletions

@@ -23,7 +23,7 @@ repos:
     exclude: "_templates/layout.html"

 - repo: https://github.com/psf/black-pre-commit-mirror
-  rev: 25.12.0
+  rev: 26.1.0
   hooks:
   - id: black
     language_version: python3
@@ -34,7 +34,7 @@ repos:
   - id: add-trailing-comma

 - repo: https://github.com/astral-sh/ruff-pre-commit
-  rev: v0.14.10
+  rev: v0.15.0
   hooks:
   - id: ruff

@@ -55,7 +55,7 @@ repos:
   - id: nb-strip-paths

 - repo: https://github.com/woodruffw/zizmor-pre-commit
-  rev: v1.19.0
+  rev: v1.22.0
   hooks:
   - id: zizmor

jupyterbook/content/code_gallery/data_access_notebooks/2018-02-20-obis.ipynb

Lines changed: 4 additions & 2 deletions

@@ -48,6 +48,8 @@
     "\n",
     "Created: 2018-02-20\n",
     "\n",
+    "Modified: 2026-02-04\n",
+    "\n",
     "The [Ocean Biogeographic Information System (OBIS)](https://www.obis.org/) is an open-access data and information system for marine biodiversity for science, conservation and sustainable development.\n",
     "\n",
     "In this example we will use R libraries [`obistools`](https://iobis.github.io/obistools) and [`robis`](https://iobis.github.io/robis) to search data regarding marine turtles occurrence in the South Atlantic Ocean.\n",
@@ -212,7 +214,7 @@
     "\n",
     "Now let us try to obtain the occurrence data for the South Atlantic. We will need a vector geometry for the ocean basin in the [well-known text (WKT)](https://en.wikipedia.org/wiki/Well-known_text) format to feed into the `robis` `occurrence` function.\n",
     "\n",
-    "In this example we converted a South Atlantic shapefile to WKT with geopandas, but one can also obtain geometries by simply drawing them on a map with [iobis maptool](https://obis.org/maptool)."
+    "In this example we converted a South Atlantic shapefile to WKT with geopandas, but one can also obtain geometries by simply drawing them on a map with [iobis maptool](https://maptool.obis.org)."
    ]
   },
   {
@@ -1070,7 +1072,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.11.5"
+   "version": "3.14.2"
   }
  },
  "nbformat": 4,
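The shapefile-to-WKT step mentioned in the notebook above can be illustrated without geopandas: a WKT polygon is just a closed ring of "lon lat" pairs. A stdlib-only sketch (the `ring_to_wkt` helper and the coordinates are hypothetical, not the notebook's actual South Atlantic geometry):

```python
def ring_to_wkt(ring):
    # WKT wants "lon lat" pairs, comma-separated, with the ring explicitly closed.
    if ring[0] != ring[-1]:
        ring = ring + [ring[0]]
    coords = ", ".join(f"{lon} {lat}" for lon, lat in ring)
    return f"POLYGON (({coords}))"

# A rough, made-up box in the South Atlantic, suitable for robis::occurrence(geometry=...):
print(ring_to_wkt([(-70, -60), (20, -60), (20, 0), (-70, 0)]))
# → POLYGON ((-70 -60, 20 -60, 20 0, -70 0, -70 -60))
```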

jupyterbook/content/code_gallery/data_access_notebooks/2022-11-23_pyobis_example.ipynb

Lines changed: 4 additions & 6 deletions

@@ -48,15 +48,13 @@
     "\n",
     "Created: 2022-11-23\n",
     "\n",
-    "Updated: 2023-03-24\n",
+    "Updated: 2026-02-04\n",
     "\n",
     "\n",
     "Author: [Mathew Biddle](mailto:mathew.biddle@noaa.gov)\n",
     "\n",
     "This notebook uses the [pyobis](https://github.com/iobis/pyobis) Python package to query the [OBIS API](https://api.obis.org/) for datasets associated with projects funded under the United States Marine Biodiversity Observation Network. The notebook walks through the process for querying the OBIS API for a specific institution, then using the resultant datasets to gather the locations of all the occurrences using the pyobis package.\n",
     "\n",
-    "![image.png](https://marinebon.org/wp-content/uploads/2022/08/MBON_logo_horizontal_60.png)\n",
-    "\n",
     "The [US Marine Biodiversity Observation Network (US MBON)](https://ioos.noaa.gov/project/mbon/) is an interagency initiative that seeks to coordinate across sectors and government to characterize biodiversity and understand drivers of change. US MBON represents a broad, collaborative effort to address the need for systematic collection and sharing of marine life information, ensure that information is available for decision-making and management from local to national levels, and document marine biodiversity status and trends in the face of human- and climate-induced change using a range of technologies and approaches. Through the National Oceanographic Partnership Program, NOAA, NASA, Office of Naval Research, and BOEM have invested in US MBON since 2014, most recently announcing new five year projects in 2022.\n",
     "\n"
    ]
@@ -179,7 +177,7 @@
    "source": [
     "Well that looks like the institution we're after! \n",
     "\n",
-    "Using the `id` we can check it out on the OBIS website: https://obis.org/institute/23070\n",
+    "Using the `id` we can check it out on the OBIS website: https://obis.org/organization/23070\n",
     "\n",
     "Yes, that does look like what we want. Now let's use that `id` to query OBIS for all associated datasets."
    ]
@@ -1176,7 +1174,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Let's explore those points a little more with [geopandas.GeoDataFrame.explore()](https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoDataFrame.explore.html).\n",
+    "Let's explore those points a little more with [geopandas.GeoDataFrame.explore()](https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoDataFrame.explore.html).\n",
     "\n",
     "This allows you to create an interactive map based on GeoPandas and folium/leaflet.js"
    ]
@@ -2009,7 +2007,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.12.5"
+   "version": "3.14.2"
   }
  },
  "nbformat": 4,
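The `id`-based lookup in the notebook above ultimately goes through the OBIS v3 REST API, which is what pyobis wraps. A stdlib-only sketch of composing such a request URL (the `dataset_query_url` helper is hypothetical, and the exact parameter names pyobis sends are an assumption; 23070 is the institution id the notebook uses):

```python
from urllib.parse import urlencode

OBIS_API = "https://api.obis.org/v3"

def dataset_query_url(instituteid, size=100):
    # Assumed shape of an OBIS v3 dataset query filtered by institution id.
    return f"{OBIS_API}/dataset?" + urlencode({"instituteid": instituteid, "size": size})

print(dataset_query_url(23070))
# → https://api.obis.org/v3/dataset?instituteid=23070&size=100
```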

jupyterbook/content/code_gallery/data_access_notebooks/2024-09-17-CKAN_API_Query.ipynb

Lines changed: 5 additions & 11 deletions

@@ -60,7 +60,7 @@
    "source": [
     "Created: 2024-09-17\n",
     "\n",
-    "Updated: 2025-03-06\n",
+    "Updated: 2026-02-04\n",
     "\n",
     "Author: [Mathew Biddle](mailto:mathew.biddle@noaa.gov)"
    ]
@@ -71,9 +71,9 @@
    "id": "Dl6UQcydrdtx"
   },
   "source": [
-    "In this notebook we highlight the ability to search the [IOOS Data Catalog](https://data.ioos.us/) for a specific subset of observations using the [CKAN](https://ckan.org/) web accessible Application Programming Interface (API). \n",
+    "In this notebook we highlight the ability to search the IOOS Data Catalog for a specific subset of observations using the [CKAN](https://ckan.org/) web accessible Application Programming Interface (API). \n",
     "\n",
-    "For this example, we want to look for observations of oxygen in the water column across the IOOS Catalog. As part of the [IOOS Metadata Profile](https://ioos.github.io/ioos-metadata/), which the US IOOS community uses to publish datasets, we know that each Regional Association and DAC will be following the [Climate and Forecast (CF) Conventions](http://cfconventions.org/) and using CF `standard_names` to describe their datasets. So, with that assumption, we can search across the IOOS Data catalog for datasets with the CF standard names that contain `oxygen` and `sea_water`. Then, we can build a simple map to show the geographical distribution of those datasets."
+    "For this example, we want to look for observations of oxygen in the water column across the IOOS Catalog. As part of the [IOOS Metadata Profile](https://ioos.github.io/ioos-metadata/), which the US IOOS community uses to publish datasets, we know that each Regional Association and DAC will be following the [Climate and Forecast (CF) Conventions](https://cfconventions.org/) and using CF `standard_names` to describe their datasets. So, with that assumption, we can search across the IOOS Data catalog for datasets with the CF standard names that contain `oxygen` and `sea_water`. Then, we can build a simple map to show the geographical distribution of those datasets."
    ]
   },
   {
@@ -1568,13 +1568,7 @@
    "stamina.retry_scheduled\n",
    " 71%|██████████████████████████████████████████████████████████████████████████████████████████████▏ | 5417/7654 [07:47<39:10, 1.05s/it]stamina.retry_scheduled\n",
    "stamina.retry_scheduled\n",
-    " 71%|██████████████████████████████████████████████████████████████████████████████████████████████▏ | 5419/7654 [07:48<32:11, 1.16it/s]stamina.retry_scheduled\n"
-   ]
-  },
-  {
-   "name": "stderr",
-   "output_type": "stream",
-   "text": [
+    " 71%|██████████████████████████████████████████████████████████████████████████████████████████████▏ | 5419/7654 [07:48<32:11, 1.16it/s]stamina.retry_scheduled\n",
    " 71%|██████████████████████████████████████████████████████████████████████████████████████████████▏ | 5420/7654 [07:49<34:28, 1.08it/s]stamina.retry_scheduled\n",
    "stamina.retry_scheduled\n",
    "stamina.retry_scheduled\n",
@@ -9169,7 +9163,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.13.1"
+   "version": "3.14.2"
   }
  },
  "nbformat": 4,
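The catalog search described in the notebook above goes through CKAN's standard `package_search` action endpoint. A stdlib-only sketch of composing such a request URL (the `search_url` helper is hypothetical, and the query string is an illustrative guess at the oxygen/sea_water search, not the notebook's exact query):

```python
from urllib.parse import urlencode

ACTION = "https://data.ioos.us/api/3/action/package_search"

def search_url(query, rows=10):
    # CKAN's package_search action takes a Solr-style "q" plus paging parameters.
    return ACTION + "?" + urlencode({"q": query, "rows": rows})

print(search_url("sea_water AND oxygen"))
# → https://data.ioos.us/api/3/action/package_search?q=sea_water+AND+oxygen&rows=10
```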

jupyterbook/content/code_gallery/data_management_notebooks/2017-05-14-running_compliance_checker.ipynb

Lines changed: 1 addition & 1 deletion

@@ -249,7 +249,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.11.5"
+   "version": "3.14.2"
   }
  },
  "nbformat": 4,

jupyterbook/content/code_gallery/data_management_notebooks/2017-11-01-Creating-Archives-Using-Bagit.ipynb

Lines changed: 4 additions & 2 deletions

@@ -48,6 +48,8 @@
     "\n",
     "Created: 2017-11-01\n",
     "\n",
+    "Modified: 2026-02-04\n",
+    "\n",
     "[`BagIt`](https://en.wikipedia.org/wiki/BagIt) is a packaging format that supports storage of arbitrary digital content. The \"bag\" consists of arbitrary content and \"tags,\" the metadata files. `BagIt` packages can be used to facilitate data sharing with federal archive centers - thus ensuring digital preservation of oceanographic datasets within IOOS and its regional associations. NOAA NCEI supports reading from a Web Accessible Folder (WAF) containing bagit archives. For an example please see: http://ncei.axiomdatascience.com/cencoos/\n",
     "\n",
     "On this notebook we will use the [python interface](http://libraryofcongress.github.io/bagit-python) for `BagIt` to create a \"bag\" of a time-series profile data. First let us load our data from a comma separated values file (`CSV`)."
@@ -190,7 +192,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Now we can create a [Orthogonal Multidimensional Timeseries Profile](http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html#_orthogonal_multidimensional_array_representation_of_time_series) object..."
+    "Now we can create a [Orthogonal Multidimensional Timeseries Profile](https://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html#_orthogonal_multidimensional_array_representation_of_time_series) object."
    ]
   },
   {
@@ -442,7 +444,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.9.7"
+   "version": "3.14.2"
   }
  },
  "nbformat": 4,
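The "bag" structure the notebook above builds is simple enough to sketch without the bagit library: payload files live under `data/`, and `manifest-sha256.txt` lists one checksum line per payload file. A stdlib-only illustration (the `manifest_lines` helper is hypothetical; the real library's `bagit.make_bag` does this plus the tag files for you):

```python
import hashlib
import os
import tempfile

def manifest_lines(payload_dir):
    # BagIt manifests are lines of "<hexdigest>  data/<relative path>".
    lines = []
    for root, _, files in os.walk(payload_dir):
        for name in sorted(files):
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
            rel = os.path.relpath(path, payload_dir).replace(os.sep, "/")
            lines.append(f"{digest}  data/{rel}")
    return lines

# Demo on a throwaway directory holding a tiny CSV payload:
with tempfile.TemporaryDirectory() as tmp:
    with open(os.path.join(tmp, "timeseries.csv"), "wb") as f:
        f.write(b"time,temperature\n")
    print(manifest_lines(tmp)[0])
```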

jupyterbook/content/code_gallery/data_management_notebooks/2023-03-20-Reading_and_writing_zarr.ipynb

Lines changed: 1 addition & 1 deletion

@@ -70,7 +70,7 @@
    "In this example we will load an ocean model data, stored as netCDF and served via THREDDS, subset it and save as zarr. Let's start by saving a single time step for the surface layer temperature and salinity.\n",
    "\n",
    "\n",
-    "\\* Many data formats can take advantage of storing the data in chunks for faster access, the zarr approach is different in that each chunk is a different object in cloud storage, making them better for parallel access. The chunks can be compressed to reduce their size and improve cloud performance even further. Zarr has a nice tutorial on how to balance chunk size for performance. Check it out: https://zarr.readthedocs.io/en/stable/user-guide/performance.html#chunk-optimizations."
+    "\\* Many data formats can take advantage of storing the data in chunks for faster access, the zarr approach is different in that each chunk is a different object in cloud storage, making them better for parallel access. The chunks can be compressed to reduce their size and improve cloud performance even further. Zarr has a nice tutorial on how to balance chunk size for performance. Check it out: https://zarr.readthedocs.io/en/stable/user-guide/performance/."
    ]
   },
   {
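The "each chunk is a separate object" point in the notebook text above can be made concrete: zarr addresses chunks by their position on a chunk grid, and each grid cell becomes one storage key. A stdlib-only sketch (the dotted key style mirrors zarr's v2 layout; the array shape and chunk sizes are made up for illustration):

```python
import math

def chunk_key(index, chunks):
    # Map an element index to the grid position of the chunk holding it;
    # zarr v2 joins these positions with "." to form the storage object key.
    return ".".join(str(i // c) for i, c in zip(index, chunks))

# A hypothetical (time, lat, lon) array of daily global fields, in ~monthly chunks:
shape, chunks = (365, 720, 1440), (30, 180, 360)
n_objects = math.prod(math.ceil(s / c) for s, c in zip(shape, chunks))
print(chunk_key((45, 200, 900), chunks), n_objects)
# element (45, 200, 900) lives in chunk "1.1.2"; the array spans 208 chunk objects
```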
