Commit 490c03c (parent 5234a84)

add changelog for v0.1.0 and v0.2.0

Keep-a-Changelog format with entries for the initial release, the v0.2 feature drop (new compression techniques and sensitivity analysis), and an Unreleased section tracking this stabilize-0.2 work.

1 file changed: CHANGELOG.md (63 additions, 0 deletions)

@@ -0,0 +1,63 @@
# Changelog

All notable changes to Comprexx are documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]

### Added

- GitHub Actions CI workflow running `pytest` on Python 3.10, 3.11, and 3.12,
  plus a `ruff check` lint job.
- `CHANGELOG.md` with history for v0.1.0 and v0.2.0.
### Changed

- Silenced the noisy `torch.ao.quantization is deprecated` warning inside the
  PTQ dynamic and static stages. The underlying API is still used, with a
  `TODO(v0.3)` marking the upcoming migration to `torchao.quantization`.
- Fixed the package `__version__` to report `0.2.0` instead of the stale
  `0.1.0` that shipped on PyPI.
- Tightened the codebase against `ruff check` and added a per-file ignore
  for `E741` in tests.
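The warning silencing described above can be sketched with the standard `warnings` machinery. The helper name and message pattern here are illustrative assumptions, not Comprexx's actual code:

```python
import warnings

def call_without_deprecation_noise(fn, *args, **kwargs):
    # Hypothetical helper: suppress only the known torch.ao.quantization
    # DeprecationWarning around one PTQ call, leaving other warnings intact.
    with warnings.catch_warnings():
        warnings.filterwarnings(
            "ignore",
            message=r".*torch\.ao\.quantization.*",
            category=DeprecationWarning,
        )
        return fn(*args, **kwargs)
```

Scoping the filter with `catch_warnings()` restores the previous filter state on exit, so the suppression cannot leak into user code.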

## [0.2.0] - 2026-04-07

### Added

- **Unstructured pruning** stage: magnitude or random element-wise pruning
  with global/local scope and an optional gradual cubic schedule.
- **N:M sparsity** stage: structured N-of-M sparsity (default 2:4) for
  NVIDIA Ampere sparse tensor cores.
- **Weight-only quantization** stage: group-wise INT4/INT8 with symmetric
  or asymmetric scaling for Linear and Conv2d layers.
- **Low-rank decomposition** stage: truncated SVD factorization of Linear
  layers, with fixed rank-ratio or energy-threshold selection modes.
- **Operator fusion** stage: Conv2d + BatchNorm2d folding via `torch.fx`,
  with graceful fallback on non-traceable models.
- **Weight clustering** stage: per-layer k-means codebook clustering.
- **`cx.analyze_sensitivity()`**: per-layer sensitivity probing via prune
  or noise perturbation. Returns a `SensitivityReport` that ranks layers
  by metric drop and can suggest `exclude_layers` above a threshold.
- New techniques are wired through the recipe schema and loader, and
  exported from `comprexx.stages`.
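The "gradual cubic schedule" named in the unstructured-pruning entry is conventionally the Zhu–Gupta polynomial ramp. A minimal sketch, assuming that convention (the function name and defaults are not Comprexx API):

```python
def cubic_sparsity(step: int, total_steps: int,
                   s_init: float = 0.0, s_final: float = 0.5) -> float:
    # Sparsity ramps from s_init to s_final following
    # s(t) = s_final + (s_init - s_final) * (1 - t)^3, with t in [0, 1].
    t = min(step, total_steps) / total_steps
    return s_final + (s_init - s_final) * (1.0 - t) ** 3
```

The cubic shape introduces most of the sparsity early (here, 0.4375 of the final 0.5 by the halfway point), then flattens so the network can recover near the end of the schedule.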
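For the N:M sparsity entry, the default 2:4 pattern keeps the two largest-magnitude weights in every contiguous group of four. A pure-Python sketch over a flat weight list (the real stage presumably operates on tensors):

```python
def apply_2_4_sparsity(weights):
    # In each group of 4 consecutive weights, zero everything
    # except the 2 with the largest absolute value.
    out = list(weights)
    for g in range(0, len(out), 4):
        group = out[g:g + 4]
        keep = sorted(range(len(group)),
                      key=lambda i: abs(group[i]), reverse=True)[:2]
        for i in range(len(group)):
            if i not in keep:
                out[g + i] = 0.0
    return out
```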
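The group-wise weight-only quantization entry can be illustrated with the symmetric case: each group of weights shares one scale `max|w| / qmax`. Function names and the group size are illustrative assumptions:

```python
def quantize_symmetric(weights, group_size=4, bits=8):
    # Per-group symmetric quantization: q = round(w / scale),
    # where scale = max|w| in the group divided by qmax.
    qmax = 2 ** (bits - 1) - 1
    q, scales = [], []
    for g in range(0, len(weights), group_size):
        group = weights[g:g + group_size]
        scale = max(abs(w) for w in group) / qmax or 1.0  # guard all-zero groups
        scales.append(scale)
        q.extend(int(round(w / scale)) for w in group)
    return q, scales

def dequantize_symmetric(q, scales, group_size=4):
    # Reconstruct each weight from its integer code and its group's scale.
    return [q[i] * scales[i // group_size] for i in range(len(q))]
```

Smaller groups track local weight magnitudes more tightly (lower error) at the cost of storing more scales; asymmetric scaling would add a per-group zero point.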
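The energy-threshold mode of the low-rank stage presumably picks the smallest rank whose leading singular values capture a target fraction of the squared spectral energy. A sketch of just that selection rule (the SVD itself and the Linear-layer surgery are omitted):

```python
def select_rank(singular_values, energy=0.95):
    # Smallest r such that sum(s_i^2 for i <= r) / sum(s_i^2) >= energy.
    sq = [s * s for s in singular_values]
    total = sum(sq)
    running = 0.0
    for r, v in enumerate(sq, start=1):
        running += v
        if running / total >= energy:
            return r
    return len(singular_values)
```

With rank r chosen, a `d_out x d_in` Linear weight factors into `d_out x r` and `r x d_in` matrices, which saves parameters whenever `r < d_out * d_in / (d_out + d_in)`.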
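Conv2d + BatchNorm2d folding rewrites the conv so the BN becomes a no-op: per output channel, weights are scaled by `gamma / sqrt(var + eps)` and the bias is shifted accordingly. A per-channel scalar sketch of the algebra (not the `torch.fx` pass itself):

```python
import math

def fold_bn(w, b, gamma, beta, mean, var, eps=1e-5):
    # BN(conv(x)) = gamma * ((w*x + b) - mean) / sqrt(var + eps) + beta,
    # which equals w'*x + b' for the folded parameters below.
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta
```

Because the folding is exact, it changes latency and parameter count but not (up to floating-point error) the model's outputs.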
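Weight clustering replaces each weight with its nearest codebook centroid, so a layer stores k float values plus small per-weight indices. A tiny 1-D Lloyd's k-means sketch (the actual stage is per-layer and presumably tensor-based):

```python
def kmeans_codebook(values, k, iters=20):
    # Lloyd's algorithm in 1-D: evenly spaced initialization,
    # then alternating assign / centroid-update steps.
    lo, hi = min(values), max(values)
    centroids = [lo + (hi - lo) * i / (k - 1) for i in range(k)]
    assign = [0] * len(values)
    for _ in range(iters):
        assign = [min(range(k), key=lambda c: (v - centroids[c]) ** 2)
                  for v in values]
        for c in range(k):
            members = [v for v, a in zip(values, assign) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, assign
```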
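The exclusion suggestion from `cx.analyze_sensitivity()` can be pictured as a ranking over per-layer metric drops. This standalone sketch is not the `SensitivityReport` API, just the idea:

```python
def suggest_exclusions(metric_drops, threshold=0.05):
    # Rank layers by how much the metric dropped when that layer was
    # perturbed (pruned or noised); layers above the threshold are
    # candidates for the recipe's exclude_layers list.
    ranked = sorted(metric_drops.items(), key=lambda kv: kv[1], reverse=True)
    return ranked, [name for name, drop in ranked if drop > threshold]
```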

### Tests

- 163 tests passing (up from 91).
## [0.1.0] - 2026-04-06

Initial release.

### Added

- Model analysis and profiling via `cx.analyze()`.
- Structured pruning with L1/L2/random criteria and global/local scope.
- Post-training dynamic and static INT8 quantization.
- ONNX export with manifest and optional `onnxruntime` validation.
- Recipe-driven pipelines (YAML) validated via Pydantic.
- CLI commands: `comprexx analyze`, `compress`, `export`.
- Accuracy guards with halt/warn actions.
- Per-stage compression reports persisted under `comprexx_runs/`.
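Structured pruning with the L1 criterion, from the 0.1.0 list above, ranks whole output channels by the L1 norm of their weights and removes the smallest first. A sketch with assumed names, treating a weight matrix as a list of channel rows:

```python
def l1_channel_order(weight_rows):
    # Each row is one output channel's weights;
    # channels with the smallest L1 norm are pruned first.
    norms = [sum(abs(w) for w in row) for row in weight_rows]
    return sorted(range(len(norms)), key=lambda i: norms[i])
```

Unlike the element-wise pruning added in 0.2.0, removing whole channels shrinks the actual tensor shapes, so it speeds up dense hardware without sparse-kernel support.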
[Unreleased]: https://github.com/cachevector/comprexx/compare/v0.2.0...HEAD
[0.2.0]: https://github.com/cachevector/comprexx/compare/v0.1.0...v0.2.0
[0.1.0]: https://github.com/cachevector/comprexx/releases/tag/v0.1.0
