@@ -159,18 +159,18 @@ If you're working with really big arrays, try the 'lazy' option:
     nbytes: 3.6P; cbytes: 0; initialized: 0/1000000000
     mode: w; path: big.zarr
 
-See the [persistence documentation](PERSISTENCE.rst) for more details of the
-file format.
+See the `persistence documentation <PERSISTENCE.rst>`_ for more
+details of the file format.
 
 Tuning
 ------
 
-``zarr`` is optimised for accessing and storing data in contiguous slices,
-of the same size or larger than chunks. It is not and will never be
-optimised for single item access.
+``zarr`` is optimised for accessing and storing data in contiguous
+slices, of the same size or larger than chunks. It is not and probably
+never will be optimised for single item access.
 
-Chunks sizes >= 1M are generally good. Optimal chunk shape will depend on
-the correlation structure in your data.
+Chunk sizes >= 1M are generally good. Optimal chunk shape will depend
+on the correlation structure in your data.
 
 ``zarr`` is designed for use in parallel computations working
 chunk-wise over data. Try it with `dask.array
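The tuning advice in the hunk above can be made concrete. Below is a minimal sketch, in plain Python with no zarr dependency, of how a one-dimensional contiguous slice maps onto chunks; the function name and the sizes are illustrative, not part of the zarr API. It shows why slices at least as large as a chunk are efficient (each chunk is visited once) while single-item access is not (every read still pays for a whole chunk):

```python
def chunks_touched(start, stop, chunk_size):
    """Return the chunk indices a 1-D slice [start, stop) touches.

    Hypothetical helper for illustration only; not a zarr function.
    """
    if start >= stop:
        return []
    first = start // chunk_size
    last = (stop - 1) // chunk_size
    return list(range(first, last + 1))

# A 1M-item chunk, per the ">= 1M" guidance above.
chunk = 1_000_000

# A contiguous slice spanning two chunks decompresses each chunk once.
print(chunks_touched(0, 2_000_000, chunk))  # [0, 1]

# A single-item read still touches (and decompresses) a full chunk,
# which is why zarr is not optimised for item access.
print(chunks_touched(5, 6, chunk))  # [0]
```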
@@ -179,12 +179,6 @@ multi-threaded, set zarr to use blosc in contextual mode::
 
     >>> zarr.set_blosc_options(use_context=True)
 
-If using zarr in a single-threaded context, set zarr to use blosc in
-non-contextual mode, which allows blosc to use multiple threads
-internally::
-
-    >>> zarr.set_blosc_options(use_context=False, nthreads=4)
-
 Acknowledgments
 ---------------
 
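The contextual mode kept by this hunk means that when the application is itself multi-threaded, each worker thread compresses its own chunks independently rather than relying on the compressor's internal thread pool. A minimal sketch of that pattern, using stdlib ``zlib`` in place of blosc (which may not be installed; the data sizes and function name are illustrative):

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_chunk(chunk: bytes) -> bytes:
    # Each worker compresses independently, analogous to blosc's
    # contextual mode (use_context=True): no compressor state is
    # shared between application threads.
    return zlib.compress(chunk, 1)

# Eight illustrative chunks of highly compressible data.
chunks = [bytes([i]) * 100_000 for i in range(8)]

# The application supplies the parallelism, one chunk per task.
with ThreadPoolExecutor(max_workers=4) as pool:
    compressed = list(pool.map(compress_chunk, chunks))

# Round-trip check: decompression restores every chunk.
assert all(zlib.decompress(c) == orig for c, orig in zip(compressed, chunks))
print(len(compressed))  # 8
```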