Francesco Tudisco

Paper accepted @ ICML 2023

I am very happy that our paper Learning the right layers: a data-driven layer-aggregation strategy for semi-supervised learning on multilayer graphs has been accepted in the proceedings of this year’s ICML conference.

Congrats to Sara Venturini on one more important achievement!

Joining the editorial board of CMM

I have just accepted an invitation to join the editorial board of the Springer journal Computational Mathematics and Modeling, a journal established and run by the Department of Computational Mathematics and Cybernetics of Lomonosov Moscow State University, a place that is very important to me.

New paper out

A nonlinear model of opinion dynamics on networks with friction-inspired stubbornness

Abstract: The modeling of opinion dynamics has seen much study in varying academic disciplines. Understanding the complex ways information can be disseminated is a complicated problem for mathematicians as well as social scientists. Inspired by the Cucker-Smale system of flocking dynamics, we present a nonlinear model of opinion dynamics that utilizes an environmental averaging protocol similar to the DeGroot and Friedkin-Johnsen models. Indeed, the way opinions evolve is complex, and nonlinear effects ought to be considered when modeling. ... Read more
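For context, the DeGroot protocol mentioned in the abstract is simple enough to sketch in a few lines. This is the classical linear model, not the paper's nonlinear friction-inspired one, and the weight matrix below is a made-up three-agent example:

```python
import numpy as np

# Classical DeGroot averaging: each agent repeatedly replaces its opinion
# with a weighted average of its neighbors' opinions, x <- W x, with W
# row-stochastic. On a connected, aperiodic network the opinions converge
# to a consensus value.
W = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.50, 0.25],
              [0.00, 0.50, 0.50]])   # made-up influence weights
x = np.array([1.0, 0.0, -1.0])      # initial opinions
for _ in range(200):
    x = W @ x
# All entries of x are now (numerically) equal: consensus is reached.
```

The consensus value is the stationary-distribution-weighted average of the initial opinions, which is exactly the kind of linear behavior that nonlinear models like the one above move beyond.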

Plenary talk at IC2S2 2023 conference

Excited that our work on Social Contagion in Science has been selected as one of the 16 plenary talks at the next International Conference on Computational Social Science in Copenhagen, out of 900+ submissions. I’m very proud of the great multidisciplinary team that has worked on this project, in particular Sara and Satyaki.

Additionally, our work on The COVID-19 research outbreak: how the pandemic culminated in a surge of new researchers has been accepted as an oral presentation at the same conference. This is based on a fantastic ongoing collaboration with Maia Majumder’s team at Harvard Medical School.

Three talks at NetSci 2023 conference

Three talks have been accepted at the upcoming NetSci 2023 conference in Vienna:

  • Quantifying the homological stability of simplicial complexes, presented by Anton Savostianov
  • Learning the right layers: a data-driven layer-aggregation strategy for semi-supervised learning on multilayer graphs, presented by Sara Venturini
  • Social Contagion in Science, presented by Satyaki Sikdar

SIAM SIGEST Outstanding paper award

I am honored to receive the SIGEST Award of the Society for Industrial and Applied Mathematics (SIAM) for our paper A unifying Perron-Frobenius theorem for nonnegative tensors via multihomogeneous mappings.

SIGEST highlights a recent outstanding paper from one of SIAM’s specialized research journals, chosen on the basis of exceptional interest to the entire SIAM community. The winning paper is reprinted in the SIGEST section of SIAM Review, the flagship journal of the society. The SIGEST version of the paper has a new title

Nonlinear Perron-Frobenius theorems for nonnegative tensors

and contains two major additions:

  1. A widely extended introduction, with many non-trivial examples of tensor eigenvalue problems in applications, including problems from computer vision and optimal transport. Here we detail how the nonlinear Perron-Frobenius theorems for tensors that we introduced can be of great help to tackle these problems.

  2. A new nonlinear Perron-Frobenius theorem that significantly improves (parts of) the previous Perron-Frobenius theorems for tensors and allows us to address more general problems (such as the optimal transport problem discussed in the introduction).

New paper out

Optimizing network robustness via Krylov subspaces

Abstract: We consider the problem of attaining either the maximal increase or reduction of the robustness of a complex network by means of a bounded modification of a subset of the edge weights. We propose two novel strategies combining Krylov subspace approximations with a greedy scheme and with the limited-memory BFGS method. The paper discusses the computational and modeling aspects of our methodology and illustrates the various optimization problems on networks that can be addressed within the proposed framework. ... Read more
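To give a concrete feel for the kind of quantity being optimized, here is a toy evaluation of one common spectral robustness measure, the natural connectivity $\log(\mathrm{trace}(e^A)/n)$. The graphs are made up, and the paper's Krylov/greedy/L-BFGS machinery is not reproduced here; this only evaluates the measure:

```python
import numpy as np

# Natural connectivity: log of the average of exp(eigenvalues) of the
# adjacency matrix. Larger values indicate a more robust network.
def natural_connectivity(A):
    lam = np.linalg.eigvalsh(A)          # eigenvalues of the adjacency matrix
    return np.log(np.exp(lam).sum() / len(lam))

# Adding an edge increases robustness: compare a 4-node path with a 4-cycle.
P = np.zeros((4, 4))
for i in range(3):
    P[i, i + 1] = P[i + 1, i] = 1.0      # path 0-1-2-3
C = P.copy()
C[0, 3] = C[3, 0] = 1.0                  # close the cycle
assert natural_connectivity(C) > natural_connectivity(P)
```

For large sparse networks one cannot afford full eigendecompositions, which is where Krylov subspace approximations of the trace of the matrix exponential come in.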

Presenting today @ University of Rome TorVergata

I am presenting today my work on low-rank training of deep neural networks at the Rome Centre on Mathematics for Modelling and Data Science at the Department of Mathematics, University of Rome Tor Vergata (Italy).

New paper out

Laplacian-based Semi-Supervised Learning in Multilayer Hypergraphs by Coordinate Descent

Abstract: Graph semi-supervised learning is an important data analysis tool where, given a graph and a set of labeled nodes, the aim is to infer labels for the remaining unlabeled nodes. In this paper, we start by considering an optimization-based formulation of the problem for an undirected graph, and then we extend this formulation to multilayer hypergraphs. We solve the problem using different coordinate descent approaches and compare the results with the ones obtained by the classic gradient descent method. ... Read more

New paper out

Quantifying the structural stability of simplicial homology

Abstract: The homology groups of a simplicial complex reveal fundamental properties of the topology of the data or the system and the notion of topological stability naturally poses an important yet not fully investigated question. In the current work, we study the stability in terms of the smallest perturbation sufficient to change the dimensionality of the corresponding homology group. Such definition requires an appropriate weighting and normalizing procedure for the boundary operators acting on the Hodge algebra’s homology groups. ... Read more

--- Continuous and discretized manifold with a 1-dimensional hole.

Paper accepted in Applied and Computational Harmonic Analysis

Excited that our paper Nodal domain count for the generalized p-Laplacian – with Piero Deidda and Mario Putti – has been accepted for publication in Applied and Computational Harmonic Analysis. Among the main results, we prove that the eigenvalues of the p-Laplacian on a tree are all variational (and thus there are exactly n of them), we show that the number of nodal domains of the p-Laplacian for general graphs can be bounded both from above and from below, and we deduce that the higher-order Cheeger inequality is tight on trees.

Arturo De Marinis from our team is attending the XMaths workshop at the University of Bari this week, presenting preliminary results of our work on the stability of neural dynamical systems.

Data Science in Action at University of Padua

I am giving an invited lecture today on Low-parametric deep learning, at the Data Science in Action day organized by the University of Padua. You can find here the slides of my talk.

Presenting today @ Örebro University

I am presenting today my work on fast and efficient training of neural networks via low-rank gradient flows at the Research Seminars in Mathematics at the School of Science and Technology, Örebro University (Sweden). Thanks Andrii Dmytryshyn for the kind invitation!

Emanuele Zangrando is presenting today our work on dynamical low-rank training of artificial neural networks at the SCDM seminar at Karlsruhe Institute of Technology.

Presenting today @ Texas A&M University

I am presenting today my work on the generalized $p$-Laplacian on graphs at the Mathematical Physics and Harmonic Analysis Seminar at Texas A&M University. Thanks Gregory Berkolaiko for the kind invitation!

Paper accepted @ NeurIPS 2022

Thrilled to hear that our paper on Low-rank lottery tickets has been accepted at NeurIPS 2022! We propose a method to speed up and reduce the memory footprint of the training phase (as well as the inference phase) of fully-connected and convolutional NNs by interpreting the training process as a gradient flow and integrating the corresponding ODE directly on the manifold of low-rank matrices.
It has been a wonderful collaboration among a fantastic team, and it would not have been possible without the excellent work of the two PhD students Emanuele Zangrando and Steffen Schotthöfer.
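For the curious, here is a back-of-the-envelope sketch of where the savings come from. The sizes are invented, and this only illustrates the factorized-storage idea, not the actual dynamical low-rank training algorithm from the paper:

```python
import numpy as np

# Keep the weight matrix in factored form W = U @ S @ V.T with small rank r:
# the training state then costs O((m + n) r) numbers instead of O(m n).
m, n, r = 512, 256, 16
rng = np.random.default_rng(0)
U = np.linalg.qr(rng.standard_normal((m, r)))[0]   # m x r orthonormal factor
V = np.linalg.qr(rng.standard_normal((n, r)))[0]   # n x r orthonormal factor
S = np.diag(rng.standard_normal(r))                # small r x r core

full_params = m * n
lowrank_params = m * r + n * r + r * r
print(lowrank_params / full_params)                # 0.095703125: ~10x smaller

# The forward pass never forms the full matrix W:
x = rng.standard_normal(n)
y = U @ (S @ (V.T @ x))                            # equals (U @ S @ V.T) @ x
```

The delicate part, and the subject of the paper, is how to update the small factors during training so that the factored network follows the full gradient flow as closely as possible.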


New paper out

Nonlinear Spectral Duality

Abstract: Nonlinear eigenvalue problems for pairs of homogeneous convex functions are particular nonlinear constrained optimization problems that arise in a variety of settings, including graph mining, machine learning, and network science. By considering different notions of duality transforms from both classical and recent convex geometry theory, in this work we show that one can move from the primal to the dual nonlinear eigenvalue formulation maintaining the spectrum, the variational spectrum as well as the corresponding multiplicities unchanged. ... Read more

Report on the XXI Householder Symposium

Our report on the XXI Householder Symposium on Numerical Linear Algebra appeared today in SIAM News. It was a great meeting, which I really enjoyed!

Paper accepted in the Journal of Complex Networks

Excited that our paper A Variance-aware Multiobjective Louvain-like Method for Community Detection in Multiplex Networks – with Sara Venturini, Andrea Cristofari, Francesco Rinaldi – has been accepted for publication in the Journal of Complex Networks, Oxford University Press.

Paper accepted in the European Journal of Applied Mathematics

Happy that our paper Hitting times for second-order random walks, joint work with Arianna Tonetto and Dario Fasino (Univ of Udine), has been accepted for publication in the European Journal of Applied Mathematics, Cambridge University Press.

Open postdoc position GSSI-SNS

We are looking for a postdoctoral research associate to join our group on a joint project with Michele Benzi from the Scuola Normale Superiore in Pisa. The postdoctoral fellow will work on topics at the interface between numerical methods and machine learning and will be funded by the MUR-Pro3 grant “STANDS - Numerical STAbility of Neural Dynamical Systems”. The official call for applications will open soon. For more details and to express your interest, please refer to this form.

New paper out

Low-rank lottery tickets: finding efficient low-rank neural networks via matrix differential equations

Abstract: Neural networks have achieved tremendous success in a large variety of applications. However, their memory footprint and computational demand can render them impractical in application settings with limited hardware or energy resources. In this work, we propose a novel algorithm to find efficient low-rank subnetworks. Remarkably, these subnetworks are determined and adapted already during the training phase and the overall time and memory resources required by both training and evaluating them is significantly reduced. ... Read more

--- By re-interpreting the weight-update phase as a time-continuous process we directly perform training within the manifold of low-rank matrices.

Presenting today @ USTC Hefei

I am presenting today my work on the generalized $p$-Laplacian on graphs at the Spectral Geometry Seminar at the University of Science and Technology of China in Hefei. Thanks Shiping Liu for the kind invitation!

Paper accepted @ KDD 2022

One more piece of great news! Our paper Core-periphery partitioning and quantum annealing – with Catherine Higham and Desmond Higham – has been accepted in the proceedings of this year’s ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.

Paper accepted @ ICML 2022

I am very happy that our paper Nonlinear Feature Diffusion on Hypergraphs – with Austin Benson and Konstantin Prokopchik – has been accepted in the proceedings of this year’s ICML. Congrats to my student Konstantin for one more important achievement!

Numerical Methods for Compression and Learning

Nicola Guglielmi and I are organizing this week the workshop Numerical Methods for Compression and Learning at GSSI. The workshop will take place in the Main Lecture Hall in the Orange Building and will feature lectures from invited speakers as well as poster sessions open to all participants.

Excited to host great colleagues and looking forward to exciting talks!

Online participation will be possible via the zoom link: https://us02web.zoom.us/j/83830006962?pwd=SmI1MTVKRTllU3dBR01Ybko5bzBJdz09

As the amount of available data grows very fast, the ability to handle and exploit very-large-scale data in an efficient and robust manner is becoming increasingly relevant. This workshop aims at bringing together experts from signal processing, compressed sensing, low-rank methods and machine learning, with the goal of highlighting modern approaches as well as challenges in computational mathematics arising in all these areas and at their intersection.


New paper out

A Variance-aware Multiobjective Louvain-like Method for Community Detection in Multiplex Networks

Abstract: In this paper, we focus on the community detection problem in multiplex networks, i.e., networks with multiple layers having same node sets and no inter-layer connections. In particular, we look for groups of nodes that can be recognized as communities consistently across the layers. To this end, we propose a new approach that generalizes the Louvain method by (a) simultaneously updating average and variance of the modularity scores across the layers, and (b) reformulating the greedy search procedure in terms of a filter-based multiobjective optimization scheme. ... Read more

New paper out

Learning the right layers: a data-driven layer-aggregation strategy for semi-supervised learning on multilayer graphs

Abstract: Clustering (or community detection) on multilayer graphs poses several additional complications with respect to standard graphs as different layers may be characterized by different structures and types of information. One of the major challenges is to establish the extent to which each layer contributes to the cluster assignment in order to effectively take advantage of the multilayer structure and improve upon the classification obtained using the individual layers or their union. ... Read more

Talk @ Calcolo Scientifico e Modelli Matematici (CSMM) workshop

Presenting today in Rome at the third workshop “Calcolo Scientifico e Modelli Matematici”. The streaming of my talk (in Italian) is available here and here you can find the pdf of my slides.

New paper out

Core-periphery detection in hypergraphs

Abstract: Core-periphery detection is a key task in exploratory network analysis where one aims to find a core, a set of nodes well-connected internally and with the periphery, and a periphery, a set of nodes connected only (or mostly) with the core. In this work we propose a model of core-periphery for higher-order networks modeled as hypergraphs and we propose a method for computing a core-score vector that quantifies how close each node is to the core. ... Read more

--- Core nodes on a ‘hyperplane’ hypergraph, color-coded according to our hypergraph-based vs clique-expansion-based core-periphery detection methods

Gianluca Ceruti from EPFL is visiting our group until Feb 18. Looking forward to some inspiring discussion on dynamical low rank!

Traveling to Naples for the 2ggALN

I am traveling today to Naples to take part in the Italian annual workshop on Numerical Linear Algebra (2ggALN), one of the nicest events of the year. Looking forward to seeing colleagues and friends and to meeting new people!

Presenting today @ MPI Leipzig

I am presenting today my work on nonlinear Laplacians for hypergraphs with applications to centrality and core-periphery detection at the Networks Seminar at the MPI for Mathematics in the Sciences Leipzig. Thanks Raffaella Mulas and Leo Torres for the kind invitation!

STANDS pro3 grant

I am honored to have been awarded a MUR-Pro3 grant on the Stability of Neural Dynamical Systems (STANDS), in collaboration with Michele Benzi (SNS) and Nicola Guglielmi (GSSI). I am looking for a postdoc to join our group on this project. Please reach out if you are interested!

New paper out

Core-periphery partitioning and quantum annealing

Abstract: We propose a new kernel that quantifies success for the task of computing a core-periphery partition for an undirected network. Finding the associated optimal partitioning may be expressed in the form of a quadratic unconstrained binary optimization (QUBO) problem, to which a state-of-the-art quantum annealer may be applied. We therefore make use of the new objective function to (a) judge the performance of a quantum annealer, and (b) compare this approach with existing heuristic core-periphery partitioning methods. ... Read more

New paper out

Nodal domain count for the generalized graph $p$-Laplacian

Abstract: Inspired by the linear Schrödinger operator, we consider a generalized $p$-Laplacian operator on discrete graphs and present new results that characterize several spectral properties of this operator with particular attention to the nodal domain count of its eigenfunctions. Just like the one-dimensional continuous $p$-Laplacian, we prove that the variational spectrum of the discrete generalized $p$-Laplacian on forests is the entire spectrum. Moreover, we show how to transfer Weyl’s inequalities for the Laplacian operator to the nonlinear case and prove new upper and lower bounds on the number of nodal domains of every eigenfunction of the generalized $p$-Laplacian on generic graphs, including variational eigenpairs. ... Read more

COMPiLE Reading Group

We are starting this week a reading group on Computational Learning, which will concentrate on papers in the general area of Numerics/Optimization/Matrix Theory/Graph Theory, with particular emphasis on their application to / connection with Machine Learning, Data Mining and Network Science. The reading group is organized by PhD students and postdocs from the Numerics and Data Science group at GSSI. The organizer kicking off the initiative is Dayana Savostianova, who will also present the first paper. If you are interested in participating and/or presenting, please fill out this form (available to GSSI users only) to be notified about the meetings and kept in the loop.

New paper out

The self-consistent field iteration for p-spectral clustering

Abstract: The self-consistent field (SCF) iteration, combined with its variants, is one of the most widely used algorithms in quantum chemistry. We propose a procedure to adapt the SCF iteration for the p-Laplacian eigenproblem, which is an important problem in the field of unsupervised learning. We formulate the p-Laplacian eigenproblem as a type of nonlinear eigenproblem with one eigenvector nonlinearity, which then allows us to adapt the SCF iteration for its solution after the application of suitable regularization techniques. ... Read more
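The core SCF idea can be sketched generically as a fixed-point loop: freeze the eigenvector inside the nonlinearity, solve the resulting linear eigenproblem, and repeat. The toy operator A(v) below is invented for illustration and is not the regularized p-Laplacian formulation of the paper:

```python
import numpy as np

def scf(A_of_v, v0, iters=50):
    """Generic SCF iteration for A(v) v = lam * v (eigenvector nonlinearity):
    freeze v inside A(.), solve the linear eigenproblem, take the smallest
    eigenpair, and repeat."""
    v = v0 / np.linalg.norm(v0)
    for _ in range(iters):
        w, V = np.linalg.eigh(A_of_v(v))
        v_new = V[:, 0]                 # eigenvector of the smallest eigenvalue
        if v_new @ v < 0:               # resolve the eigenvector sign ambiguity
            v_new = -v_new
        v, lam = v_new, w[0]
    return lam, v

# Made-up toy: path-graph Laplacian plus a weak |v|^2 "potential".
L = np.diag([1.0, 2.0, 2.0, 1.0]) \
    - np.diag(np.ones(3), 1) - np.diag(np.ones(3), -1)
A_of_v = lambda v: L + 0.1 * np.diag(v ** 2)
lam, v = scf(A_of_v, np.ones(4))
residual = np.linalg.norm(A_of_v(v) @ v - lam * v)  # ~0 at a fixed point
```

Plain SCF like this can oscillate or stall on harder nonlinearities, which is exactly why the paper introduces regularization before applying it to the p-Laplacian.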

Talk @ Numerical Methods and Scientific Computing (NMSC) conference

Presenting today at the Numerical Methods and Scientific Computing conference taking place this week in Luminy at the CIRM conference center. This meeting is dedicated to Claude Brezinski for his 80th birthday, to whom I send my very best wishes. Thanks to the whole organizing committee for the kind invitation! Below you can find the title of my talk, a link to the abstract, as well as the slides of the presentation (in case you wish to have a look at them).


Title: Optimal L-shaped matrix reordering via nonlinear matrix eigenvectors

Abstract: We are interested in finding a permutation of the entries of a given square matrix $A$, so that the maximum number of its nonzero entries is moved to one of the corners in an L-shaped fashion.
If we interpret the nonzero entries of the matrix as the edges of a graph, this problem boils down to the so-called core–periphery structure, consisting of two sets: the core, a set of nodes that is highly connected across the whole graph, and the periphery, a set of nodes that is well connected only to the nodes that are in the core.
Matrix reordering problems have applications in sparse factorizations and preconditioning, while revealing core–periphery structures in networks has applications in economic, social and communication networks. This optimal reordering problem is a hard combinatorial optimization problem, which we relax into the continuous problem:

\begin{equation}\label{eq:1ft} \max f(x) := \sum_{ij}|A_{ij}| \max\{x_i,x_j\} \qquad \mathrm{s.t.} \qquad \|x\|=1 \end{equation}

While $f$ is still highly nonconvex and thus hard to treat directly, we show that the global maximum of $f$ coincides with the nonlinear Perron eigenvector of a suitably defined parameter-dependent matrix $M(x)$, i.e. the positive solution to the nonlinear eigenvector problem $M(x) x = \lambda x$. Using recent advances in nonlinear Perron–Frobenius theory, we show that \eqref{eq:1ft} has a unique solution and we propose a nonlinear power-method-type scheme that allows us to solve \eqref{eq:1ft} with global convergence guarantees and that effectively scales to very large and sparse matrices. We present several numerical experiments showing that the new method largely outperforms baseline techniques.
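As a rough illustration of such a power-method-type scheme, here is a generic sketch with a placeholder map in place of the actual $M(x)$ from the talk (which is not reproduced here):

```python
import numpy as np

def nonlinear_power_method(M, x0, tol=1e-10, max_iter=1000):
    """Power-method-type iteration for M(x) x = lambda x: repeatedly apply
    the current matrix M(x) and renormalize. Convergence to the positive
    Perron eigenvector relies on the contractivity conditions of nonlinear
    Perron-Frobenius theory (not checked here)."""
    x = np.abs(x0) / np.linalg.norm(x0)   # start from a positive vector
    for _ in range(max_iter):
        y = M(x) @ x
        x_new = y / np.linalg.norm(y)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x_new

# Sanity check: for a constant map M(x) = A the scheme reduces to the
# classical power method and returns the Perron eigenvector of A.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
v = nonlinear_power_method(lambda x: A, np.ones(2))
lam = v @ A @ v                           # Rayleigh quotient ~ Perron root
```

Each iteration costs one matrix-vector product with the current $M(x)$, which is what makes this kind of scheme scale to very large sparse matrices.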

Link to slideshare presentation: Optimal L-shaped matrix reordering via nonlinear matrix eigenvectors

Top 20 most cited papers on SIMAX

I’m very excited (and honored!) to find out that two papers of mine are in the top 20 most cited papers in SIMAX (SIAM Journal on Matrix Analysis and Applications) since 2018. Huge thanks to my amazing collaborators Matthias Hein and Antoine Gautier.

Dayana has successfully passed her admission exam and officially joined our group of Numerical Analysis and Data Science, starting her PhD work on the Mathematics of Adversarial Attacks with me. Welcome!

Paper open-access in the Journal of Scientific Computing

Excited that our paper The Global Convergence of the Nonlinear Power Method for Mixed-Subordinate Matrix Norms, joint work with Antoine Gautier and Matthias Hein, has been published open-access in the Journal of Scientific Computing.

New paper out

Hitting times for second-order random walks

Abstract: A second-order random walk on a graph or network is a random walk where transition probabilities depend not only on the present node but also on the previous one. A notable example is the non-backtracking random walk, where the walker is not allowed to revisit a node in one step. Second-order random walks can model physical diffusion phenomena in a more realistic way than traditional random walks and have been very successfully used in various network mining and machine learning settings. ... Read more
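As a minimal illustration, the non-backtracking walk named in the abstract can be simulated directly. The graph below is a made-up toy:

```python
import random

def non_backtracking_walk(adj, start, steps, seed=0):
    """Second-order walk: the next node is chosen uniformly among the current
    node's neighbors, excluding the node we just came from (so the transition
    depends on the last *two* nodes, not just the current one)."""
    rng = random.Random(seed)
    prev, cur = None, start
    path = [start]
    for _ in range(steps):
        choices = [v for v in adj[cur] if v != prev]
        if not choices:              # dead end: backtracking is the only option
            choices = list(adj[cur])
        prev, cur = cur, rng.choice(choices)
        path.append(cur)
    return path

# Toy graph: a 4-cycle. After the first step the walker has a single allowed
# neighbor, so it keeps circling and never immediately reverses direction.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
path = non_backtracking_walk(adj, 0, 10)
```

Analytically, such walks are usually studied by lifting them to a first-order chain on directed edges, which is the viewpoint behind hitting-time computations like those in the paper.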

Minitutorial @ SIAM LA 2021

I am very excited I will be giving today a minitutorial on Applied Nonlinear Perron–Frobenius Theory at the SIAM conference on Applied Linear Algebra (LA21).
I will present the tutorial together with Antoine Gautier.

Here you can find the webpage of the minitutorial.

Abstract

Nonnegative matrices are pervasive in data mining applications. For example, distance and similarity matrices are fundamental tools for data classification, affinity matrices are key instruments for graph matching, adjacency matrices are at the basis of almost every graph mining algorithm, transition matrices are the main tool for studying stochastic processes on data. The Perron-Frobenius theory makes the algorithms based on these matrices very attractive from a linear algebra point of view. At the same time, as the available data grows both in terms of size and complexity, more and more data mining methods rely on nonlinear mappings rather than just matrices, which however still have some form of nonnegativity. The nonlinear Perron-Frobenius theory allows us to transfer most of the theoretical and computational niceties of nonnegative matrices to the much broader class of nonlinear multihomogeneous operators. These types of operators include for example commonly used maps associated with tensors and are tightly connected to the formulation of nonlinear eigenvalue problems with eigenvector nonlinearities. In this minitutorial we will introduce the concept of multihomogeneous operators and we will present the state-of-the-art version of the nonlinear Perron-Frobenius theorem for nonnegative nonlinear mappings. We will discuss several numerical optimization implications connected to nonlinear and higher-order versions of the Power and the Sinkhorn methods and several open challenges, both from the theoretical and the computational viewpoints. We will also discuss a number of problems in data mining, machine learning and network science which can be cast in terms of nonlinear eigenvector problems and we will show how the nonlinear Perron-Frobenius theory can help solve them.

Minisymposium @ SIAM LA

Piero Deidda, Mario Putti and I are organizing a minisymposium on Nonlinear Laplacians on graphs and manifolds with applications to data and image processing within the SIAM Conference on Applied Linear Algebra 2021, happening virtually during the week May 17-21, 2021. See also the conference’s virtual program.

Our mini is scheduled for May 17, starting at 9:35am Central time (New Orleans).

Abstract

Nonlinear Laplacian operators on graphs and manifolds appear frequently in computational mathematics as they are widely used in a diverse range of applications, including data and image processing problems such as clustering, semi-supervised learning, segmentation. A multitude of recent work has shown that these operators may increase the performance of classical algorithms based on linear Laplacians. This has led to several developments on both the theoretical and the numerical aspects of nonlinear Laplacian operators on graphs and manifolds, including non-differential extreme cases such as the infinity-Laplacian, the 1-Laplacian and p-Laplacians with negative exponent. In this minisymposium we aim at sampling both theoretical and applied recent work in this active area of research.

Speakers:

  • Martin Burger — Nonlinear Spectral Decompositions Related to P-Laplacians
  • Qinglan Xia — P-Laplacians on Graphs with a Negative Exponent, and its Relation to Branched Optimal Transportation
  • Dong Zhang — Piecewise Multilinear Extension and Spectral Theory for Function Pairs
  • Piero Deidda — Nodal Domain Count and Spectral Properties of Generalized P-Laplacians on Graphs
  • Abderrahim Elmoataz — Game P-Laplacian on Graph: from Tug-of-War Games to Unified Processing in Image and Points Clouds Processing
  • Dimosthenis Pasadakis — Multiway P-Spectral Clustering on Grassmann Manifolds
  • Pan Li — Strongly Local Hypergraph Diffusions for Clustering and Semi-Supervised Learning
  • Shenghao Yang — P-Norm Hyper-Flow Diffusion

Presenting today @ TU Eindhoven

I am presenting today my work on semi-supervised learning with higher-order graph data at the CASA Colloquium at Eindhoven University of Technology (the Netherlands). Thanks Stefano Massei for the kind invitation!

Presenting today @ Joint GSSI-Sapienza AI seminar

I’m excited to open up today the joint GSSI-Sapienza seminar series on Artificial Intelligence with a talk on nonlinear diffusion methods for semi-supervised learning, joint work with Austin Benson and Konstantin Prokopchik. Thanks Giacomo Gradenigo for the kind invitation!


New paper out

Nonlinear Feature Diffusion on Hypergraphs

Abstract: Hypergraphs are a common model for multiway relationships in data, and hypergraph semi-supervised learning is the problem of assigning labels to all nodes in a hypergraph, given labels on just a few nodes. Diffusions and label spreading are classical techniques for semi-supervised learning in the graph setting, and there are some standard ways to extend them to hypergraphs. However, these methods are linear models, and do not offer an obvious way of incorporating node features for making predictions. ... Read more
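For reference, the classical linear label-spreading baseline that the abstract mentions fits in a few lines. This is the standard graph method on an invented toy example, not the paper's nonlinear hypergraph diffusion:

```python
import numpy as np

# Classical label spreading on a graph:
#   F <- alpha * S @ F + (1 - alpha) * Y,   S = D^{-1/2} A D^{-1/2}
# where Y holds the known labels as one-hot rows.
def label_spreading(A, Y, alpha=0.9, iters=200):
    d = A.sum(axis=1)
    S = A / np.sqrt(np.outer(d, d))      # symmetrically normalized adjacency
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * (S @ F) + (1 - alpha) * Y
    return F.argmax(axis=1)              # predicted class per node

# Toy path graph 0-1-2-3-4, with node 0 labeled class 0 and node 4 class 1.
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0
Y = np.zeros((5, 2))
Y[0, 0] = 1.0                            # known label: node 0 -> class 0
Y[4, 1] = 1.0                            # known label: node 4 -> class 1
labels = label_spreading(A, Y)           # nodes inherit the nearest seed label
```

Being linear, this scheme has no natural slot for node features, which is the limitation the nonlinear feature diffusion in the paper addresses.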
