Compressing BART models for resource-constrained operation

June 6, 2022 | Amazon AWS

A combination of distillation and distillation-aware quantization compresses a BART model to 1/16th its original size.
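The teaser pairs knowledge distillation with quantization. As a hedged illustration only (not Amazon's actual implementation, which the teaser does not detail), the core of distillation is training a small student model to match the teacher's temperature-softened output distribution; the sketch below shows that loss in plain Python:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax; a higher temperature softens
    # the distribution, exposing the teacher's "dark knowledge".
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between teacher and student soft targets,
    # scaled by T^2 as in Hinton et al.'s distillation formulation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return kl * temperature ** 2
```

In practice this term is minimized alongside the ordinary task loss; "distillation-aware" quantization additionally applies the quantizer during this training so the student learns weights that survive low-precision storage.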