Francesco Tudisco

Associate Professor (Reader) in Machine Learning

School of Mathematics, The University of Edinburgh
The Maxwell Institute for Mathematical Sciences
School of Mathematics, Gran Sasso Science Institute
JCMB, King's Buildings, Edinburgh EH9 3FD, UK
email: f dot tudisco at ed.ac.uk

New paper out

Subhomogeneous Deep Equilibrium Models

Abstract: Implicit-depth neural networks have emerged as powerful alternatives to traditional networks in various applications in recent years. However, these models often lack guarantees of the existence and uniqueness of their fixed points, raising stability, performance, and reproducibility issues. In this paper, we present a new analysis of the existence and uniqueness of fixed points for implicit-depth neural networks based on the concept of subhomogeneous operators and nonlinear Perron-Frobenius theory. Compared to previous similar analyses, our theory allows for weaker assumptions on the parameter matrices, thus yielding a more flexible framework for well-defined implicit networks. ... Read more
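
To make the object of study concrete, here is a minimal numpy sketch (my own illustration, not the paper's construction) of an implicit-depth layer: its output is a fixed point of $z = f(z, x)$, computed by plain fixed-point iteration. The existence and uniqueness of this fixed point is exactly what the subhomogeneity analysis guarantees.

```python
import numpy as np

# Toy implicit-depth (DEQ-style) layer: output is a fixed point of z = f(z, x).
# Sizes, scaling, and the tanh activation are illustrative assumptions; the
# scaling of A makes f a contraction so the naive iteration converges.
rng = np.random.default_rng(0)
d = 32
A = rng.standard_normal((d, d)) / (4 * np.sqrt(d))  # small norm => contraction
B = rng.standard_normal((d, d)) / np.sqrt(d)
x = rng.standard_normal(d)

def f(z, x):
    return np.tanh(A @ z + B @ x)

z = np.zeros(d)
for k in range(1000):
    z_next = f(z, x)
    if np.linalg.norm(z_next - z) < 1e-12:
        break
    z = z_next
print(f"fixed point reached after {k} iterations")
```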

Paper accepted @ AI4DiffEqtnsInSci ICLR Workshop

Our paper Mixture of Neural Operators: Incorporating Historical Information for Longer Rollouts has been accepted at the ICLR 2024 Workshop on AI4DifferentialEquations in Science. Big congratulations to my student Harris for his excellent work.

Turing fellowship

So happy to share that I have been selected as part of this year's Turing Fellows cohort! I look forward to an exciting time as a Turing Fellow. Check out the announcements by the University of Edinburgh, the Bayes Centre, and the Alan Turing Institute, as well as LinkedIn's post one and post two.


New paper out

Contractivity of neural ODEs: an eigenvalue optimization problem

Abstract: We propose a novel methodology to solve a key eigenvalue optimization problem which arises in the contractivity analysis of neural ODEs. When looking at contractivity properties of a one-layer weight-tied neural ODE $\dot{u}(t)=\sigma(Au(t)+b)$ (with $u,b \in {\mathbb R}^n$, $A$ a given $n \times n$ matrix, and $\sigma: {\mathbb R} \to {\mathbb R}^+$ an activation function applied entry-wise to vectors $z \in {\mathbb R}^n$), we are led to study the logarithmic norm of a set of products of the form $DA$, where $D$ is a diagonal matrix such that ${\mathrm{diag}}(D) \in \sigma'({\mathbb R}^n)$. ... Read more
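
For concreteness, here is a naive numerical probe of the quantity at stake (my sketch; the paper solves the underlying eigenvalue optimization problem rather than sampling): the logarithmic 2-norm is $\mu_2(M) = \lambda_{\max}((M + M^\top)/2)$, and contractivity requires $\mu_2(DA) < 0$ for all admissible diagonal $D$. I assume a softplus-like activation below, so the diagonal entries range over $(0,1)$.

```python
import numpy as np

# Naive sampling probe of contractivity: check mu_2(D A) for random diagonal D
# with entries in (0, 1), the range of sigma' for a softplus activation.
# Sampling only probes the admissible set; the paper optimizes over it exactly.
rng = np.random.default_rng(1)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n) - 1.5 * np.eye(n)

def mu2(M):
    """Logarithmic 2-norm: largest eigenvalue of the symmetric part of M."""
    return np.linalg.eigvalsh((M + M.T) / 2)[-1]

worst = max(mu2(np.diag(rng.uniform(0, 1, n)) @ A) for _ in range(500))
print(f"worst sampled mu_2(DA) = {worst:.3f}")
```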

New paper out

Neural rank collapse: Weight decay and small within-class variability yield low-rank bias

Abstract: Recent work in deep learning has shown strong empirical and theoretical evidence of an implicit low-rank bias: weight matrices in deep networks tend to be approximately low-rank and removing relatively small singular values during training or from available trained models may significantly reduce model size while maintaining or even improving model performance. However, the majority of the theoretical investigations around low-rank bias in neural networks deal with oversimplified deep linear networks. ... Read more

--- The rank of the weight matrix $W_{\ell}$ of layer $\ell$ trained with weight decay $\lambda$ decreases with the total class variability $\mathrm{TCV}$ of any latent space $X_k$, with $k<\ell$.
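
The low-rank bias is easy to probe numerically. A hedged sketch (illustrative only, not the paper's experiments): compute the numerical rank of a weight matrix and the relative error incurred by discarding its small singular values.

```python
import numpy as np

# Probe low-rank structure: numerical rank and error of truncating small
# singular values. The matrix and threshold are illustrative placeholders
# for a trained weight matrix and the paper's actual criterion.
rng = np.random.default_rng(2)
W = rng.standard_normal((256, 64)) @ rng.standard_normal((64, 256))  # rank <= 64

U, s, Vt = np.linalg.svd(W, full_matrices=False)
r = int(np.sum(s > 1e-8 * s[0]))            # numerical rank
W_r = (U[:, :r] * s[:r]) @ Vt[:r, :]        # best rank-r approximation
rel_err = np.linalg.norm(W - W_r) / np.linalg.norm(W)
print(f"numerical rank {r}, relative truncation error {rel_err:.2e}")
```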

New paper out

Cholesky-like Preconditioner for Hodge Laplacians via Heavy Collapsible Subcomplex

Abstract: Techniques based on $k$-th order Hodge Laplacian operators $L_k$ are widely used to describe the topology as well as the governing dynamics of high-order systems modeled as simplicial complexes. All of these require solving a number of least-squares problems with $L_k$ as the coefficient matrix, for example in order to compute portions of the spectrum or to integrate the dynamical system. In this work, we introduce the notion of an optimal collapsible subcomplex and present a fast combinatorial algorithm for the computation of a sparse Cholesky-like preconditioner for $L_k$ that exploits the topological structure of the simplicial complex. ... Read more
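
For readers unfamiliar with Hodge Laplacians, here is a tiny self-contained example (my own toy, not the paper's preconditioner) assembling $L_1 = B_1^\top B_1 + B_2 B_2^\top$ from the boundary matrices of a filled triangle with one pendant edge; the dimension of $\ker L_1$ counts the 1-dimensional holes.

```python
import numpy as np

# 1st-order Hodge Laplacian of a toy simplicial complex:
# nodes {1,2,3,4}, edges (1,2), (1,3), (2,3), (1,4), triangle (1,2,3).
B1 = np.array([            # node-edge incidence with orientation signs
    [-1, -1,  0, -1],
    [ 1,  0, -1,  0],
    [ 0,  1,  1,  0],
    [ 0,  0,  0,  1],
], dtype=float)
B2 = np.array([[1], [-1], [1], [0]], dtype=float)  # edge-triangle incidence

L1 = B1.T @ B1 + B2 @ B2.T
holes = np.sum(np.linalg.eigvalsh(L1) < 1e-10)     # dim ker(L_1) = Betti_1
print(holes)  # 0: the triangle is filled, so there is no 1-dimensional hole
```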

New paper out

Collaboration and topic switches in science

Abstract: Collaboration is a key driver of science and innovation. Mainly motivated by the need to leverage different capacities and expertise to solve a scientific problem, collaboration is also an excellent source of information about the future behavior of scholars. In particular, it allows us to infer the likelihood that scientists choose future research directions via the intertwined mechanisms of selection and social influence. Here we thoroughly investigate the interplay between collaboration and topic switches. ... Read more

Call for PhD applications @ Maxwell Institute’s Graduate School

I am looking for new PhD students on the following three projects within the Maxwell Institute’s Graduate School at The University of Edinburgh:

  • Structured reduced-order deep learning for scientific and industrial applications
  • Modern numerical linear algebra techniques for efficient learning and optimization (co-supervised with John Pearson)
  • Stability of Artificial Intelligence Algorithms (co-supervised with Des Higham)

For more details and to apply: https://www.mac-migs.ac.uk/mac-migs-2024/
The deadline for applications is 22 January 2024. The PhD starts in September 2024 and lasts four years. The first year is devoted to training, with several available courses and training activities (also in collaboration with industry). Successful applicants will receive a full-time scholarship for the entire duration of the program.

The Maxwell Institute for Mathematical Sciences brings together research activities in mathematical sciences at Edinburgh and Heriot-Watt Universities. The Institute has a physical home on the top floor of the Bayes Centre which it shares with the International Centre for Mathematical Sciences (ICMS), creating a hub for mathematical sciences research, training, and applications in central Edinburgh.

Paper accepted in ESAIM: M2AN

Our paper Optimizing network robustness via Krylov subspaces has been accepted in ESAIM: Mathematical Modelling and Numerical Analysis. A fun collaboration with Stefano Massei! We provide efficient and accurate Krylov-based methods for optimizing the robustness of large networks. Matlab code is available here.

Visiting MaLGa Machine Learning Genoa Center

Exciting research days ahead visiting the MaLGa Machine Learning Genoa Center! I will also present our recent work on reducing model parameters in deep learning and low-rank bias at the ML seminar. Thanks Lorenzo Rosasco for the kind invitation!


New paper out

A nonlinear spectral core-periphery detection method for multiplex networks

Abstract: Core-periphery detection aims to separate the nodes of a complex network into two subsets: a core that is densely connected to the entire network and a periphery that is densely connected to the core but sparsely connected internally. The definition of core-periphery structure in multiplex networks that record different types of interactions between the same set of nodes but on different layers is nontrivial since a node may belong to the core in some layers and to the periphery in others. ... Read more

--- Our NSM vs. multilayer degree on a 2-layer Internet network with different noise levels

Paper accepted in EURO J Computational Optimization

Our paper Laplacian-based Semi-Supervised Learning in Multilayer Hypergraphs by Coordinate Descent has been accepted in the EURO Journal on Computational Optimization. We explore the computational advantages of randomized coordinate gradient methods for semi-supervised learning on higher-order graph models.

Paper accepted at NeurIPS 2023

Excited that our paper on Robust low-rank training has been accepted at NeurIPS 2023! We propose a method to train networks with low-rank weights while reducing the network's condition number, thus increasing its robustness to adversarial attacks. Congrats to my two PhD students Dayana Savostianova and Emanuele Zangrando!

New paper out

Learning the effective order of a hypergraph dynamical system

Abstract: Dynamical systems on hypergraphs can display a rich set of behaviours not observable for systems with pairwise interactions. Given a distributed dynamical system with a putative hypergraph structure, an interesting question is thus how much of this hypergraph structure is actually necessary to faithfully replicate the observed dynamical behaviour. To answer this question, we propose a method to determine the minimum order of a hypergraph necessary to approximate the corresponding dynamics accurately. ... Read more

New paper out

Robust low-rank training via approximate orthonormal constraints

Abstract: With the growth of model and data sizes, a broad effort has been made to design pruning techniques that reduce the resource demand of deep learning pipelines, while retaining model performance. In order to reduce both inference and training costs, a prominent line of work uses low-rank matrix factorizations to represent the network weights. Although able to retain accuracy, we observe that low-rank methods tend to compromise model robustness against adversarial perturbations. ... Read more

--- Evolution of loss, accuracy, and condition number for LeNet5 on the MNIST dataset. The proposed approach (CondLR) converges faster while maintaining a well-conditioned neural network.
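
The rationale in one line: for a linear layer, a relative input perturbation can be amplified by at most the condition number $\kappa(W) = \sigma_{\max}/\sigma_{\min}$. A small illustrative check of this quantity (mine, not the paper's CondLR algorithm):

```python
import numpy as np

# Condition number over the nonzero singular values, the quantity whose growth
# makes a layer sensitive to (adversarial) input perturbations.
rng = np.random.default_rng(6)

def kappa(W):
    s = np.linalg.svd(W, compute_uv=False)
    s = s[s > 1e-12 * s[0]]                 # drop numerically zero values
    return s[0] / s[-1]

U, _, Vt = np.linalg.svd(rng.standard_normal((20, 20)))
well = U @ np.diag(np.linspace(1.0, 2.0, 20)) @ Vt   # kappa = 2
ill = U @ np.diag(np.logspace(0, 4, 20)) @ Vt        # kappa = 1e4
print(kappa(well), kappa(ill))
```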

New paper out

Rank-adaptive spectral pruning of convolutional layers during training

Abstract: The computing cost and memory demand of deep learning pipelines have grown fast in recent years and thus a variety of pruning techniques have been developed to reduce model parameters. The majority of these techniques focus on reducing inference costs by pruning the network after a pass of full training. A smaller number of methods address the reduction of training costs, mostly based on compressing the network via low-rank layer factorizations. ... Read more

--- Comparison of vanilla compression approaches with different tensor formats with the proposed TDLRT method. Mean and standard deviation of 20 weight initializations are displayed. TDLRT achieves higher compression rates at higher accuracy with lower variance between initializations.

DLRA Workshop @ EPFL

Traveling today to EPF Lausanne for the DLRA “New Horizon” workshop. I will present our recent work on spectral pruning of deep learning models. Thanks Gianluca Ceruti and Jonas Kusch for the kind invitation!


SIAM SIGEST paper available online

Our SIAM Review SIGEST paper Nonlinear Perron-Frobenius theorems for nonnegative tensors has been published in Vol. 65, Issue 2 of SIAM Review. Thanks to the Editors for the support and for the nice presentation!

Numerical Linear Algebra Days @ GSSI

I am co-organizing the 18th edition of the Numerical Linear Algebra Days (2giorni) workshop at GSSI.

This is the 18th workshop in a series dedicated to Numerical Linear Algebra and Applications, which aims to gather the (mostly Italian) Numerical Linear Algebra scientific community to discuss recent advances in the area and to promote the exchange of novel ideas and collaboration among researchers.

Here are the slides of my lecture on Nonlinear Perron-Frobenius Theory.


New paper out

Nonlinear Perron-Frobenius theorems for nonnegative tensors

Abstract: We present a unifying Perron–Frobenius theory for nonlinear spectral problems defined in terms of nonnegative tensors. By using the concept of tensor shape partition, our results include, as a special case, a wide variety of particular tensor spectral problems considered in the literature and can be applied to a broad set of problems involving tensors (and matrices), including the computation of operator norms, graph and hypergraph matching in computer vision, hypergraph spectral theory, higher-order network analysis, and multimarginal optimal transport. ... Read more
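
To make one special case concrete, here is a hedged sketch of the classical power iteration for the eigenproblem $T(x,x) = \lambda x$ with an entrywise positive third-order tensor. Convergence guarantees for iterations of this kind are precisely what Perron-Frobenius-type theorems provide; the normalization below is an illustrative choice.

```python
import numpy as np

# Power iteration for T(x, x) = lambda * x with an entrywise positive tensor.
rng = np.random.default_rng(4)
n = 10
T = rng.random((n, n, n)) + 0.1             # entrywise positive 3rd-order tensor

x = np.ones(n) / np.sqrt(n)
for _ in range(1000):
    y = np.einsum('ijk,j,k->i', T, x, x)    # y_i = sum_{j,k} T_ijk x_j x_k
    x_next = y / np.linalg.norm(y)
    if np.linalg.norm(x_next - x) < 1e-12:
        break
    x = x_next
lam = x @ np.einsum('ijk,j,k->i', T, x, x)  # Rayleigh-type estimate, ||x|| = 1
print(f"lambda ~ {lam:.4f}")
```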

Paper accepted @ ICML 2023

I am very happy that our paper Learning the right layers: a data-driven layer-aggregation strategy for semi-supervised learning on multilayer graphs has been accepted in the proceedings of this year's ICML conference.

Congrats to Sara Venturini on one more important achievement!

Joining the editorial board of CMM

I just accepted an invitation to join the editorial board of the Springer journal Computational Mathematics and Modeling, established and run by the Department of Computational Mathematics and Cybernetics of Lomonosov Moscow State University, a place that is very important to me.

New paper out

A nonlinear model of opinion dynamics on networks with friction-inspired stubbornness

Abstract: The modeling of opinion dynamics has seen much study in varying academic disciplines. Understanding the complex ways information can be disseminated is a complicated problem for mathematicians as well as social scientists. Inspired by the Cucker-Smale system of flocking dynamics, we present a nonlinear model of opinion dynamics that utilizes an environmental averaging protocol similar to the DeGroot and Friedkin-Johnsen models. Indeed, the way opinions evolve is complex, and nonlinear effects ought to be considered when modelling them. ... Read more
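
For orientation, a minimal sketch of the DeGroot-style environmental averaging the abstract refers to (the paper's model is nonlinear and adds friction-inspired stubbornness, which this toy update omits):

```python
import numpy as np

# DeGroot-style repeated averaging: opinions converge toward consensus under
# a row-stochastic influence matrix. Purely a baseline illustration.
rng = np.random.default_rng(7)
n = 8
W = rng.random((n, n))
W /= W.sum(axis=1, keepdims=True)   # row-stochastic influence weights
x = rng.random(n)                   # initial opinions
for _ in range(100):
    x = W @ x                       # each agent averages its neighborhood
print(np.round(x, 3))               # near-identical entries: consensus
```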

Plenary talk at IC2S2 2023 conference

Excited that our work on Social Contagion in Science has been selected as one of the 16 plenary talks at the next International Conference on Computational Social Science in Copenhagen, out of 900+ submissions. I’m very proud of the great multidisciplinary team that has worked on this project, in particular Sara and Satyaki.

Additionally, our work on The COVID-19 research outbreak: how the pandemic culminated in a surge of new researchers has been accepted as an oral presentation at the same conference. This is based on a fantastic ongoing collaboration with Maia Majumder's team at Harvard Medical School.

Three talks at NetSci 2023 conference

Three talks have been accepted at the upcoming NetSci 2023 conference in Vienna:

  • Quantifying the homological stability of simplicial complexes, presented by Anton Savostianov
  • Learning the right layers: a data-driven layer-aggregation strategy for semi-supervised learning on multilayer graphs, presented by Sara Venturini
  • Social Contagion in Science, presented by Satyaki Sikdar

SIAM SIGEST Outstanding paper award

I am honored to receive the SIGEST Award of the Society for Industrial and Applied Mathematics (SIAM) for our paper A unifying Perron-Frobenius theorem for nonnegative tensors via multihomogeneous mappings.

SIGEST highlights a recent outstanding paper from one of SIAM's specialized research journals, chosen on the basis of exceptional interest to the entire SIAM community. The winning paper is reprinted in the SIGEST section of SIAM Review, the flagship journal of the society. The SIGEST version of the paper has a new title:

Nonlinear Perron-Frobenius theorems for nonnegative tensors

and contains two major additions:

  1. A widely extended introduction, with many non-trivial examples of tensor eigenvalue problems in applications, including problems from computer vision and optimal transport. Here we detail how the nonlinear Perron-Frobenius theorems for tensors that we introduced can be of great help to tackle these problems.

  2. A new nonlinear Perron-Frobenius theorem that significantly improves (parts of) the previous Perron-Frobenius theorems for tensors and allows us to address more general problems (such as the optimal transport problem discussed in the introduction).

New paper out

Optimizing network robustness via Krylov subspaces

Abstract: We consider the problem of attaining either the maximal increase or reduction of the robustness of a complex network by means of a bounded modification of a subset of the edge weights. We propose two novel strategies combining Krylov subspace approximations with a greedy scheme and with the limited-memory BFGS method. The paper discusses the computational and modeling aspects of our methodology and illustrates the various optimization problems on networks that can be addressed within the proposed framework. ... Read more
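
As a point of reference, one widely used robustness measure is the natural connectivity $\log(\operatorname{tr}(e^A)/n)$. The brute-force computation below (my illustration; the paper's Krylov-based methods are designed precisely to avoid dense computations like this at scale) evaluates it on a small random graph.

```python
import numpy as np
from scipy.linalg import expm

# Natural connectivity of a small undirected graph, computed densely.
rng = np.random.default_rng(3)
n = 30
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T                                     # symmetric 0/1 adjacency
robustness = np.log(np.trace(expm(A)) / n)      # log of mean subgraph centrality
print(f"natural connectivity: {robustness:.4f}")
```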

Presenting today @ University of Rome Tor Vergata

I am presenting today my work on low-rank training of deep neural networks at the Rome Centre on Mathematics for Modelling and Data Science at the Department of Mathematics, University of Rome Tor Vergata (Italy).

New paper out

Laplacian-based Semi-Supervised Learning in Multilayer Hypergraphs by Coordinate Descent

Abstract: Graph semi-supervised learning is an important data analysis tool: given a graph and a set of labeled nodes, the aim is to infer the labels of the remaining unlabeled nodes. In this paper, we start by considering an optimization-based formulation of the problem for an undirected graph, and then we extend this formulation to multilayer hypergraphs. We solve the problem using different coordinate descent approaches and compare the results with those obtained by the classic gradient descent method. ... Read more
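
As background, here is a minimal sketch of the kind of coordinate update involved, on a plain single-layer graph (my simplification of the multilayer hypergraph setting): minimizing $f^\top L f$ with the labeled values held fixed gives the harmonic update $f_i \leftarrow \frac{1}{d_i}\sum_j w_{ij} f_j$ on each unlabeled node.

```python
import numpy as np

# Coordinate descent for graph SSL: labeled nodes are fixed, each unlabeled
# node is set to the weighted average of its neighbors (harmonic update).
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)
f = np.array([1.0, 0.0, 0.0, 0.0, -1.0])   # nodes 0 and 4 carry labels +1, -1
labeled = {0, 4}
d = W.sum(axis=1)

for _ in range(100):                        # sweeps over the coordinates
    for i in range(len(f)):
        if i not in labeled:
            f[i] = W[i] @ f / d[i]
print(np.round(f, 3))                       # smooth labels on unlabeled nodes
```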

New paper out

Quantifying the structural stability of simplicial homology

Abstract: The homology groups of a simplicial complex reveal fundamental properties of the topology of the data or the system, and the notion of topological stability naturally poses an important yet not fully investigated question. In the current work, we study stability in terms of the smallest perturbation sufficient to change the dimensionality of the corresponding homology group. Such a definition requires an appropriate weighting and normalizing procedure for the boundary operators acting on the Hodge algebra's homology groups. ... Read more

--- Continuous and discretized manifold with a 1-dimensional hole.

Paper accepted in Applied and Computational Harmonic Analysis

Excited that our paper Nodal domain count for the generalized p-Laplacian – with Piero Deidda and Mario Putti – has been accepted for publication in Applied and Computational Harmonic Analysis. Among the main results, we prove that the eigenvalues of the p-Laplacian on a tree are all variational (and thus there are exactly n of them), we show that the number of nodal domains of the p-Laplacian for general graphs can be bounded both from above and from below, and we deduce that the higher-order Cheeger inequality is tight on trees.

Arturo De Marinis from our team is attending the XMaths workshop at the University of Bari this week, presenting preliminary results from our work on the stability of neural dynamical systems.

Data Science in Action at University of Padua

I am giving an invited lecture today on Low-parametric deep learning at the Data Science in Action day organized by the University of Padua. You can find the slides of my talk here.

Presenting today @ Örebro University

I am presenting today my work on fast and efficient neural network training via low-rank gradient flows at the Research Seminars in Mathematics at the School of Science and Technology, Örebro University (Sweden). Thanks Andrii Dmytryshyn for the kind invitation!

Emanuele Zangrando is presenting today our work on dynamical low-rank training of artificial neural networks at the SCDM seminar at Karlsruhe Institute of Technology.

Presenting today @ Texas A&M University

I am presenting today my work on the generalized $p$-Laplacian on graphs at the Mathematical Physics and Harmonic Analysis Seminar at Texas A&M University. Thanks Gregory Berkolaiko for the kind invitation!

Paper accepted at NeurIPS 2022

Thrilled to hear that our paper on Low-rank lottery tickets has been accepted at NeurIPS 2022! We propose a method to speed up and reduce the memory footprint of the training phase (as well as the inference phase) of fully-connected and convolutional NNs by interpreting the training process as a gradient flow and integrating the corresponding ODE directly on the manifold of low-rank matrices.
It has been a wonderful collaboration among a fantastic team, and it would not have been possible without the excellent work of the two PhD students Emanuele Zangrando and Steffen Schotthöfer.


New paper out

Nonlinear Spectral Duality

Abstract: Nonlinear eigenvalue problems for pairs of homogeneous convex functions are particular nonlinear constrained optimization problems that arise in a variety of settings, including graph mining, machine learning, and network science. By considering different notions of duality transforms from both classical and recent convex geometry theory, in this work we show that one can move from the primal to the dual nonlinear eigenvalue formulation maintaining the spectrum, the variational spectrum as well as the corresponding multiplicities unchanged. ... Read more

Report on the XXI Householder Symposium

Our report on the XXI Householder Symposium on Numerical Linear Algebra appeared today in SIAM News. It was a great meeting, which I really enjoyed!

Paper accepted in the Journal of Complex Networks

Excited that our paper A Variance-aware Multiobjective Louvain-like Method for Community Detection in Multiplex Networks – with Sara Venturini, Andrea Cristofari, and Francesco Rinaldi – has been accepted for publication in the Journal of Complex Networks (Oxford University Press).

Paper accepted in the European Journal of Applied Mathematics

Happy that our paper Hitting times for second-order random walks, joint work with Arianna Tonetto and Dario Fasino (University of Udine), has been accepted for publication in the European Journal of Applied Mathematics (Cambridge University Press).

Open postdoc position GSSI-SNS

We are looking for a postdoctoral research associate to join our group on a joint project with Michele Benzi from the Scuola Normale Superiore in Pisa. The postdoctoral fellow will work on topics at the interface between Numerical Methods and Machine Learning and will be funded by the MUR-Pro3 grant “STANDS - Numerical STAbility of Neural Dynamical Systems”. The official call for applications will open soon. For more details and to express your interest, please refer to this form.

New paper out

Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations

Abstract: Neural networks have achieved tremendous success in a large variety of applications. However, their memory footprint and computational demand can render them impractical in application settings with limited hardware or energy resources. In this work, we propose a novel algorithm to find efficient low-rank subnetworks. Remarkably, these subnetworks are determined and adapted already during the training phase and the overall time and memory resources required by both training and evaluating them is significantly reduced. ... Read more

--- By re-interpreting the weight-update phase as a time-continuous process we directly perform training within the manifold of low-rank matrices.
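
A caricature of the idea in a few lines (deliberately simplified: the paper integrates the gradient flow with a dedicated dynamical low-rank scheme, whereas this sketch just retracts to rank $r$ by truncated SVD after every explicit Euler step):

```python
import numpy as np

# Gradient flow on the rank-r manifold, caricatured as Euler step + truncation.
# Loss, sizes, and learning rate are illustrative assumptions.
rng = np.random.default_rng(5)
m, n, r = 40, 30, 5
target = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank <= r
W = np.zeros((m, n))
lr = 0.1

for step in range(200):
    G = W - target                          # gradient of 0.5*||W - target||_F^2
    U, s, Vt = np.linalg.svd(W - lr * G, full_matrices=False)
    W = (U[:, :r] * s[:r]) @ Vt[:r, :]      # retract back to rank r
print(f"final loss: {0.5 * np.linalg.norm(W - target)**2:.2e}")
```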

Presenting today @ USTC Hefei

I am presenting today my work on the generalized $p$-Laplacian on graphs at the Spectral Geometry Seminar at the University of Science and Technology of China in Hefei. Thanks Shiping Liu for the kind invitation!

Paper accepted @ KDD 2022

More great news! Our paper Core-periphery partitioning and quantum annealing – with Catherine Higham and Desmond Higham – has been accepted in the proceedings of this year's ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

Paper accepted @ ICML 2022

I am very happy that our paper Nonlinear Feature Diffusion on Hypergraphs – with Austin Benson and Konstantin Prokopchik – has been accepted in the proceedings of this year's ICML. Congrats to my student Konstantin on one more important achievement!

Numerical Methods for Compression and Learning

Nicola Guglielmi and I are organizing this week the workshop Numerical Methods for Compression and Learning at GSSI. The workshop will take place in the Main Lecture Hall of the Orange Building and will feature lectures from invited speakers as well as poster sessions open to all participants.

Excited to host great colleagues and looking forward to exciting talks!

Online participation will be possible via the zoom link: https://us02web.zoom.us/j/83830006962?pwd=SmI1MTVKRTllU3dBR01Ybko5bzBJdz09

As the amount of available data is growing very fast, the importance of being able to handle and exploit very-large-scale data in an efficient and robust manner is becoming increasingly more relevant. This workshop aims at bringing together experts from signal processing, compressed sensing, low rank methods and machine learning with the goal of highlighting modern approaches as well as challenges in computational mathematics arising in all these areas and at their intersection.

