Do Compressed LLMs Forget Knowledge? An Experimental Study with Practical Implications

This paper was accepted at the Machine Learning and Compression Workshop at NeurIPS 2024.
Compressing Large Language Models (LLMs) often leads to reduced performance, especially for knowledge-intensive tasks. In this work, we dive into how compression damages LLMs' inherent knowledge and the possible remedies. We start by proposing two conjectures on the nature of the damage: one is that certain knowledge is forgotten (or erased) after LLM compression, hence requiring the compressed model to (re)learn from data with additional parameters; the other presumes that knowledge is internally…
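
The first conjecture's remedy, having the compressed model (re)learn from data with additional parameters, is commonly realized with parameter-efficient fine-tuning. Below is a minimal sketch, not the paper's code, of one such approach: a LoRA-style low-rank adapter attached to a frozen (compressed) linear layer. The class name, rank, and dimensions are illustrative assumptions.

```python
# Illustrative sketch of "(re)learning with additional parameters":
# a low-rank adapter added on top of a frozen, compressed linear layer.
import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    """Wraps a frozen linear layer and adds trainable low-rank parameters:
    y = W_frozen x + (B A) x, where only A and B are updated."""

    def __init__(self, frozen_linear: nn.Linear, rank: int = 8):
        super().__init__()
        self.frozen = frozen_linear
        for p in self.frozen.parameters():
            p.requires_grad = False  # compressed weights stay fixed
        in_f, out_f = frozen_linear.in_features, frozen_linear.out_features
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_f, rank))        # up-projection, init to zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus low-rank correction learned from data.
        return self.frozen(x) + x @ self.A.T @ self.B.T


# Usage: wrap a layer of the compressed model and fine-tune only A and B.
layer = nn.Linear(512, 512)
adapted = LowRankAdapter(layer, rank=8)
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
print(f"trainable adapter parameters: {trainable}")
```

Because B is initialized to zero, the adapted layer initially reproduces the compressed model's output exactly, and the extra parameters only gradually re-introduce knowledge as they are trained.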