School of Mathematics
Numerical Analysis and Data Science Group
GSSI Gran Sasso Science Institute
Viale Francesco Crispi 7 — 67100 — L’Aquila (Italy)
email: francesco dot tudisco at gssi dot it
Excited that our paper Nodal domain count for the generalized p-Laplacian – with Piero Deidda and Mario Putti – has been accepted for publication in Applied and Computational Harmonic Analysis. Among the main results, we prove that the eigenvalues of the p-Laplacian on a tree are all variational (and thus there are exactly n of them), we show that the number of nodal domains of the p-Laplacian on general graphs can be bounded both from above and from below, and we deduce that the higher-order Cheeger inequality is tight on trees.
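For context, a standard form of the graph p-Laplacian (written here in generic notation, not necessarily the generalized operator studied in the paper) acts on a function $f : V \to \mathbb{R}$ by

```latex
% Graph p-Laplacian on a weighted graph; w_{uv} are edge weights.
(\Delta_p f)(u) \;=\; \sum_{v \sim u} w_{uv}\,
  \lvert f(u) - f(v)\rvert^{p-2}\,\bigl(f(u) - f(v)\bigr),
\qquad p > 1,
% and (\lambda, f) is an eigenpair when
(\Delta_p f)(u) \;=\; \lambda\,\lvert f(u)\rvert^{p-2} f(u)
\quad \text{for all } u \in V.
```

For $p = 2$ this reduces to the usual graph Laplacian, whose eigenvectors and nodal domains are the classical objects the paper generalizes.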
I am giving an invited lecture today on Low-parametric deep learning at the Data Science in Action day organized by the University of Padua. You can find the slides of my talk here.
I am presenting today my work on fast and efficient neural network training via low-rank gradient flows at the Research Seminars in Mathematics at the School of Science and Technology, Örebro University (Sweden). Thanks to Andrii Dmytryshyn for the kind invitation!
Steffen Schotthöfer and Emanuele Zangrando from our lab are attending the NeurIPS conference in person this week and will present our work on low-rank training and pruning of neural networks. In this work we developed a framework for stable and efficient training on low-rank manifolds, reducing memory cost and training time by an order of magnitude! Tested successfully on ImageNet1k, transformers, and several other benchmarks.
If you are there too, swing by our poster session at Hall J #604 on Wed 30 Nov, 9:30 am PST.
Emanuele Zangrando is presenting today our work on dynamical low-rank training of artificial neural networks at the SCDM seminar at Karlsruhe Institute of Technology.
I am presenting today my work on the generalized $p$-Laplacian on graphs at the Mathematical Physics and Harmonic Analysis Seminar at Texas A&M University. Thanks to Gregory Berkolaiko for the kind invitation!
Thrilled to hear that our paper on Low-rank lottery tickets has been accepted at NeurIPS 2022! We propose a method to speed up and reduce the memory footprint of the training phase (as well as the inference phase) of fully-connected and convolutional NNs by interpreting the training process as a gradient flow and integrating the corresponding ODE directly on the manifold of low-rank matrices.
It has been a wonderful collaboration with a fantastic team, and it would not have been possible without the excellent work of the two PhD students Emanuele Zangrando and Steffen Schotthöfer.
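To give a flavor of the idea, here is a minimal NumPy sketch of a gradient-flow step that stays on a rank-r manifold: an explicit Euler step followed by a truncated-SVD retraction. This is an illustrative toy only, not the integrator developed in the paper (which avoids forming full matrices and is far more careful numerically).

```python
import numpy as np

def low_rank_gradient_step(W, grad, lr=0.1, rank=1):
    """One explicit Euler step of the gradient flow dW/dt = -grad f(W),
    retracted back onto the rank-r manifold via truncated SVD.
    Illustrative sketch only, not the paper's integrator."""
    W_new = W - lr * grad
    U, s, Vt = np.linalg.svd(W_new, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Toy problem: minimize 0.5 * ||W - target||_F^2 over rank-1 matrices,
# where the target itself is rank 1, so the flow can reach it exactly.
rng = np.random.default_rng(0)
target = np.outer(rng.standard_normal(5), rng.standard_normal(5))  # rank 1
W = np.zeros((5, 5))
for _ in range(200):
    grad = W - target            # gradient of the quadratic objective
    W = low_rank_gradient_step(W, grad, lr=0.3, rank=1)

print(np.linalg.matrix_rank(W))            # iterate stays rank 1
print(np.linalg.norm(W - target) < 1e-6)   # and converges to the target
```

In the paper this idea is applied layer-wise to the weight matrices of a network, so that only the low-rank factors are ever stored and updated; the toy above only shows why the retraction keeps the iterate on the manifold.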
Our report on the XXI Householder Symposium on Numerical Linear Algebra appeared today in SIAM News. It was a great meeting that I really enjoyed!
Excited that our paper A Variance-aware Multiobjective Louvain-like Method for Community Detection in Multiplex Networks – with Sara Venturini, Andrea Cristofari, and Francesco Rinaldi – has been accepted for publication in the Journal of Complex Networks, Oxford University Press.
Happy that our paper Hitting times for second-order random walks, joint work with Arianna Tonetto and Dario Fasino (Univ. of Udine), has been accepted for publication in the European Journal of Applied Mathematics, Cambridge University Press.
We are looking for a
I am presenting today my work on the generalized $p$-Laplacian on graphs at the Spectral Geometry Seminar at the University of Science and Technology of China in Hefei. Thanks to Shiping Liu for the kind invitation!
More great news! Our paper Core-periphery partitioning and quantum annealing – with Catherine Higham and Desmond Higham – has been accepted to the proceedings of this year's ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
I am very happy that our paper Nonlinear Feature Diffusion on Hypergraphs – with Austin Benson and Konstantin Prokopchik – has been accepted to the proceedings of this year's ICML. Congrats to my student Konstantin for one more important achievement!
Nicola Guglielmi and I are organizing this week the workshop Numerical Methods for Compression and Learning at GSSI. The workshop will take place in the Main Lecture Hall of the Orange Building and will feature lectures from invited speakers as well as poster sessions open to all participants.
Excited to host great colleagues and looking forward to exciting talks!
Online participation will be possible via the zoom link: https://us02web.zoom.us/j/83830006962?pwd=SmI1MTVKRTllU3dBR01Ybko5bzBJdz09
As the amount of available data grows ever faster, the ability to handle and exploit very-large-scale data efficiently and robustly is becoming increasingly relevant. This workshop aims to bring together experts from signal processing, compressed sensing, low-rank methods, and machine learning, with the goal of highlighting modern approaches as well as challenges in computational mathematics arising in all these areas and at their intersection.