Francesco Tudisco

Associate Professor (Reader) in Machine Learning

School of Mathematics, The University of Edinburgh
The Maxwell Institute for Mathematical Sciences
Gran Sasso Science Institute
JCMB, King’s Buildings, Edinburgh EH9 3FD, UK
email: f dot tudisco at ed.ac.uk

Geometry-aware training of factorized layers in tensor Tucker format

Emanuele Zangrando, Steffen Schotthöfer, Jonas Kusch, Gianluca Ceruti, Francesco Tudisco,
Advances in Neural Information Processing Systems (NeurIPS), 2024

Abstract

Reducing parameter redundancies in neural network architectures is crucial for achieving feasible computational and memory requirements during training and inference. Given its easy implementation and flexibility, one promising approach is layer factorization, which reshapes weight tensors into a matrix format and parameterizes them as the product of two small-rank matrices. However, this approach typically requires an initial full-model warm-up phase, prior knowledge of a feasible rank, and is sensitive to parameter initialization. In this work, we introduce a novel approach to train the factors of a Tucker decomposition of the weight tensors. Our training proposal proves optimal in locally approximating the original unfactorized dynamics, independently of the initialization. Furthermore, the rank of each mode is dynamically updated during training. We provide a theoretical analysis of the algorithm, showing convergence, approximation, and local descent guarantees. The method’s performance is further illustrated through a variety of experiments, showing remarkable training compression rates and performance comparable to or even better than the full baseline and alternative layer-factorization strategies.
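For a concrete picture of the two parameterizations the abstract contrasts, a minimal PyTorch sketch follows. It only illustrates the layer structures (a two-factor matrix layer and a convolution whose kernel is stored in Tucker format); it does not implement the paper's geometry-aware training scheme or its dynamic rank adaptation, and the class names, the choice to factorize only the channel modes, and the initialization scalings are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class FactorizedLinear(nn.Module):
    """Baseline two-factor parameterization: W (out x in) ~= U @ V^T, rank r."""

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.U = nn.Parameter(torch.randn(out_features, rank) / rank**0.5)
        self.V = nn.Parameter(torch.randn(in_features, rank) / rank**0.5)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (x @ V) @ U^T costs O(r(n_in + n_out)) per sample vs O(n_in * n_out).
        return (x @ self.V) @ self.U.t()


class TuckerConv2d(nn.Module):
    """Conv layer whose 4-way kernel is stored in Tucker format.

    Only the channel modes are factorized here for brevity; the spatial
    modes stay in the core. Each factorized mode has its own rank, which
    is the quantity the paper's method adapts during training.
    """

    def __init__(self, in_ch: int, out_ch: int, ksize: int,
                 r_in: int, r_out: int, padding: int = 1):
        super().__init__()
        self.core = nn.Parameter(0.01 * torch.randn(r_out, r_in, ksize, ksize))
        self.U_out = nn.Parameter(torch.randn(out_ch, r_out) / r_out**0.5)
        self.U_in = nn.Parameter(torch.randn(in_ch, r_in) / r_in**0.5)
        self.padding = padding

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Contract the core with the channel factors to recover a full kernel:
        # kernel[o, c, h, w] = sum_{a, b} U_out[o, a] * core[a, b, h, w] * U_in[c, b]
        kernel = torch.einsum("oa,abhw,cb->ochw", self.U_out, self.core, self.U_in)
        return F.conv2d(x, kernel, padding=self.padding)


# Usage: a Tucker conv with 64 -> 128 channels, 3x3 kernels, mode ranks (32, 16).
layer = TuckerConv2d(in_ch=64, out_ch=128, ksize=3, r_in=16, r_out=32)
y = layer(torch.randn(8, 64, 32, 32))  # -> shape (8, 128, 32, 32)

Note that the sketch contracts the factors back into a full kernel at each forward pass for clarity; storing and updating only the core and factor matrices is what yields the memory savings during training.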

Please cite this paper as:

@inproceedings{zangrando2024geometry,
  title={Geometry-aware training of factorized layers in tensor Tucker format},
  author={Zangrando, Emanuele and Schotth{\"o}fer, Steffen and Kusch, Jonas and Ceruti, Gianluca and Tudisco, Francesco},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2024}
}

Links: arXiv

Keywords: deep learning, neural networks, convolutional networks, low-rank, pruning, compression