All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.
- torchvision warning no longer shown on import of the top-level library (error message on usage remains)
- ability to load models pretrained on ImageNet
- Jupyter notebooks for demonstration purposes
- support for PyTorch version 2.0, Python 3.10
- package no longer depends on torchvision directly (only needed for pretrained models from model_hub)
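The two entries above describe a deferred-dependency pattern: torchvision is only needed when a pretrained model is actually requested, so importing the package itself never touches it. A minimal sketch of that pattern, with hypothetical helper names (not bitorch's actual code):

```python
import importlib.util

def torchvision_available() -> bool:
    # True only if torchvision can be imported in this environment
    return importlib.util.find_spec("torchvision") is not None

def load_pretrained(name: str):
    # hypothetical placeholder: only checks the dependency when a
    # pretrained model is requested, never at import time
    if not torchvision_available():
        raise RuntimeError(
            "torchvision is required to load pretrained models from the model hub"
        )
    # ... actual model-hub loading would happen here ...
```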
- QEmbeddingBag
- new models
- simple example script for MNIST
- support for integration of bitorch's inference engine for the following layers:
  - QLinear
  - QConv
- a quantized DLRM version, derived from this implementation
- example code for training the quantized DLRM model
- new quantization function: Progressive Sign
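Progressive Sign gradually morphs a smooth activation into a hard sign over the course of training. A rough plain-float illustration; the interpolation scheme below is a guess for explanatory purposes, not bitorch's exact formula:

```python
def progressive_sign(x: float, progress: float) -> float:
    """Hypothetical sketch: `progress` goes from 0 to 1 over training.

    At progress 0 this is a clipped identity; at progress 1 it is a
    hard sign function. bitorch's actual schedule may differ.
    """
    if progress >= 1.0:
        return 1.0 if x >= 0 else -1.0
    scale = 1.0 / (1.0 - progress)  # slope steepens as training progresses
    return max(-1.0, min(1.0, scale * x))
```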
- new features in PyTorch Lightning example:
  - training with Knowledge Distillation
  - improved logging
  - callback to update Progressive Sign module
- option to integrate custom models, datasets, quantization functions
- a quantization scheduler which lets you change quantization methods during training
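Such a scheduler can be pictured as a mapping from training epochs to quantization functions. The sketch below uses made-up names to illustrate the idea; bitorch's actual scheduler API differs:

```python
class QuantizationScheduler:
    """Hypothetical sketch: swaps the active quantization function
    at given epochs (not bitorch's real API)."""

    def __init__(self, schedule):
        # schedule: list of (start_epoch, quantization_fn) pairs
        self.schedule = sorted(schedule, key=lambda pair: pair[0])

    def quantization_for_epoch(self, epoch: int):
        # the last entry whose start_epoch has been reached is active
        active = self.schedule[0][1]
        for start_epoch, fn in self.schedule:
            if epoch >= start_epoch:
                active = fn
        return active

def identity(x):
    return x

def hard_sign(x):
    return 1.0 if x >= 0 else -1.0

# train full precision for 10 epochs, then switch to binary
scheduler = QuantizationScheduler([(0, identity), (10, hard_sign)])
```

With this schedule, epochs 0-9 use `identity` and epoch 10 onwards uses `hard_sign`.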
- a padding layer
- requirements changed:
  - code now depends on torch 1.12.x and torchvision 0.13.x
  - requirements for examples are now stored in their respective folders
  - optional requirements now install everything needed to run all examples
- code is now formatted with the black code formatter
- using PyTorch's implementation of RAdam
- renamed the `bitwidth` attribute of quantization functions to `bit_width`
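The rename only touches the attribute's spelling. A toy stand-in (the class is made up purely to show the new name):

```python
class ExampleQuantization:
    """Made-up stand-in for a bitorch quantization function,
    showing the renamed attribute."""

    def __init__(self, bit_width: int = 1):
        self.bit_width = bit_width  # was `self.bitwidth` before the rename

q = ExampleQuantization()
```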
- moved the image datasets out of the bitorch core package into the image classification example
- fixed an error from the updated protobuf package
- automatic documentation generation using sphinx
- more documentation of layers and modules
- bit-width of quantization functions is now stored
- new layers:
  - Pact activation function
  - QEmbedding
  - QEmbeddingBag
- fvbitcore support in the example scripts for flop and model size estimation on operation level
- image classification example:
  - script now uses PyTorch Lightning
  - includes distributed training capability
  - added wandb metric logging
- QConv layers can now be pickled
- different quantized versions of LeNet available
- a bug where layer input and weight quantization functions could not be set using command line arguments
- a bug where modules could not be imported on operating systems that use a path separator other than '/'
- made package compatible with Python 3.7
- basic quantized layers:
  - QActivation
  - QConv
  - QLinear
- several debug layers
- resnet and lenet model implementations
- various quantization functions:
  - approxsign
  - dorefa
  - sign
  - steheaviside
  - swishsign
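As one example from the list above, approxsign is commonly defined (as in Bi-Real Net) as a piecewise-polynomial surrogate of the sign function whose derivative supplies a usable gradient in the backward pass. A plain-float sketch of that common definition; bitorch's tensor implementation differs:

```python
def approx_sign(x: float) -> float:
    """Piecewise-polynomial surrogate of sign (Bi-Real Net style).

    The quantization's forward pass uses the hard sign; a smooth curve
    like this one is what gets differentiated in the backward pass.
    """
    if x < -1.0:
        return -1.0
    if x < 0.0:
        return 2.0 * x + x * x
    if x < 1.0:
        return 2.0 * x - x * x
    return 1.0
```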
- support for cifar10 and mnist
- general training script for image classification
- result logger for csv and tensorboard
- checkpoint manager
- eta estimator
- experiment creator
- model visualization in logs and tensorboard
- config classes for bitorch layers / quantizations with automated argparse argument creation
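The automated argparse creation can be pictured like this; all names below are invented for illustration, and bitorch's real config classes differ:

```python
import argparse

class ExampleLayerConfig:
    """Made-up config class illustrating automated argparse argument
    creation from declared options (not bitorch's actual class)."""

    options = {"input_quantization": "sign", "weight_quantization": "sign"}

    @classmethod
    def add_config_arguments(cls, parser: argparse.ArgumentParser) -> None:
        # each declared option becomes a --flag with its default value
        for name, default in cls.options.items():
            parser.add_argument(f"--{name.replace('_', '-')}", default=default)

parser = argparse.ArgumentParser()
ExampleLayerConfig.add_config_arguments(parser)
args = parser.parse_args(["--input-quantization", "dorefa"])
```

Here `args.input_quantization` is `"dorefa"` while `args.weight_quantization` keeps its default `"sign"`.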