Our group at COMPiLE Lab focuses on **mathematical and computational aspects of machine learning and network science**. Our research combines tools from applied mathematics, numerical analysis, scientific computing, and graph and spectral theory to address pressing challenges in modern machine learning and its applications to classification, language processing, and the analysis of complex systems.

We are always looking for talented and motivated students and postdoctoral scholars to join the team. If you are interested in working with us, please get in touch to discuss enrollment in the PhD programs at the University of Edinburgh (UK) or GSSI (Italy), as well as available postdoc positions. Some information is available below, but it may be outdated.

On a rolling basis, I accept motivated UoE students for internship/thesis projects on:

- Theory of deep learning
- Model-order reduction for deep learning
- Scientific machine learning
- Machine learning on graphs
- Adversarial machine learning
- Learning dynamical systems

# Examples of available PhD project topics

### Structured reduced-order deep learning for scientific and industrial applications

Deep learning has revolutionized various domains, including computer vision, language processing, and scientific computing. However, the growing complexity of neural network architectures, together with the ever-growing size of the available data, has led to computational challenges, making it increasingly important to develop techniques that reduce the number of parameters while maintaining performance, robustness, and model-specific underlying structures. This project investigates structure-preserving parameter reduction methods for neural networks, with applications in computer vision, language models, reinforcement learning, and scientific simulation. The project will combine state-of-the-art deterministic and randomized techniques in numerical linear algebra, numerical optimization on smooth manifolds, bi-level optimization, and numerical integration of matrix and tensor differential equations to analyze the performance, bias, robustness, and generalization of modern deep learning architectures, and to design new efficient learning algorithms that reduce memory requirements and training time.
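As a toy illustration of the kind of parameter reduction involved (a minimal sketch, not the project's actual methodology), a dense weight matrix of a network layer can be replaced by a low-rank factorization obtained from a truncated SVD, cutting the parameter count while controlling the approximation error:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))  # weight matrix of a dense layer

# Truncated SVD: keep only the leading r singular directions
r = 16
U, s, Vt = np.linalg.svd(W, full_matrices=False)
A = U[:, :r] * s[:r]   # (256, r) factor, columns scaled by singular values
B = Vt[:r, :]          # (r, 256) factor

# Storing A and B instead of W reduces the parameter count
full_params = W.size              # 256 * 256 = 65536
low_rank_params = A.size + B.size # 2 * 256 * 16 = 8192
```

By the Eckart–Young theorem, `A @ B` is the best rank-`r` approximation of `W` in the Frobenius norm; the project's structure-preserving methods go well beyond this, e.g. by evolving low-rank factors during training.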

Specific objectives of the project topic are:

- To develop innovative techniques that reduce the model order of deep learning pipelines while preserving desirable data structures
- To investigate theoretical guarantees of generalization, approximation, and robustness of compressed models
- To implement high-performing software libraries optimizing model accuracy, efficiency, and scalability
- To apply the developed techniques to solve practical real-world and cross-domain problems in computer vision, language models, dynamical systems, and scientific simulations

The successful student will have opportunities for collaboration within and beyond the School of Mathematics at UoE, including with industrial partners and scientific labs.

### Modern numerical linear algebra techniques for efficient learning and optimization

Deep learning ultimately boils down to solving optimization problems with functional constraints. In a variety of modern application settings, these constraints are defined in terms of (partial) differential equations (PDEs). Examples include so-called neural differential equations, where the neural network architecture is designed implicitly as the solution of a PDE, and physics-informed neural networks, where the differential nature of the problem is embedded into the loss function.

These PDE-inspired neural network architectures yield several advantages over standard deep learning models as one can, for instance, exploit the broad literature on dynamical systems and PDEs to analyze their stability and to design more efficient learning algorithms that use fast and robust numerical solvers.
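As a minimal sketch of this dynamical-systems viewpoint (an illustrative example, not part of the project specification), a residual network can be read as a forward-Euler discretization of a differential equation x'(t) = f(x(t), W(t)), which is what lets tools from numerical integration and stability analysis be brought to bear:

```python
import numpy as np

def f(x, W):
    # A simple parameterised vector field, playing the role of one layer
    return np.tanh(W @ x)

def resnet_forward(x, weights, h=0.1):
    # Residual network = forward-Euler steps of x'(t) = f(x(t), W(t)):
    # each skip connection x + h*f(x, W) is one explicit Euler step of size h
    for W in weights:
        x = x + h * f(x, W)
    return x

rng = np.random.default_rng(1)
weights = [0.1 * rng.standard_normal((4, 4)) for _ in range(10)]
x0 = np.ones(4)
xT = resnet_forward(x0, weights)
```

Replacing the explicit Euler step by other (e.g. implicit or adaptive) integrators yields different architectures with different stability properties, which is one way PDE and ODE theory informs network design.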

In this project we will combine state-of-the-art deterministic and randomized linear algebra techniques for parallel-in-time integration and optimization, including problems with PDE constraints, to improve the efficiency of learning algorithms. We will consider applications to computer vision, language models, and scientific simulations. Since a wide variety of problems can be cast as optimization problems with constraints defined in terms of (partial) differential equations, combining linear algebra solvers for optimization with deep learning techniques has broad applicability.

The supervisors have a range of experience in numerical linear algebra-inspired algorithms for machine learning, and fast and robust linear algebra solvers for optimization problems constrained by PDEs.

The student on this project should have experience with computational mathematics (for example, numerical linear algebra, numerical solution of PDEs, or optimization) and with coding, as well as an interest in applying such techniques to modern machine learning algorithms.

# PhD program at UoE

We welcome applications every year through the Maxwell Institute’s Graduate School PhD programme in Modelling, Analysis and Computation (MAC-MIGS). The application deadline is around the end of January, and the start date is in September. The duration of the PhD programme is 4 years.

The Maxwell Institute for Mathematical Sciences brings together research activities in mathematical sciences at Edinburgh and Heriot-Watt Universities. Members of the Maxwell Institute are academics from the School of Mathematics at the University of Edinburgh and the Departments of Mathematics and of Actuarial Mathematics and Statistics at Heriot-Watt University. The Maxwell Institute builds on the long history of collaboration between the three departments, best exemplified by the establishment and continued operation of the International Centre for Mathematical Sciences (ICMS). Since 2018, the Maxwell Institute has had a physical home on the top floor of the Bayes Centre, which it shares with the ICMS, creating a hub for mathematical sciences research, training and applications in central Edinburgh.

**The MAC-MIGS 2024 programme**

The MAC-MIGS 2024 programme offers *bespoke, cohort-based training in applied and computational mathematics*. During the 4-year programme, students will take 90 credits worth of courses, with the following specialised courses being offered:

- Semester 1, year 1: A 15 credit Group Project with an industrial partner
- Semester 1 + Semester 2, year 1: Two 15 credit courses on Contemporary topics in Applied and Computational Mathematics
- The remainder of the credits will be chosen from the Scottish Mathematical Sciences Training Centre (SMSTC), as well as advanced courses offered at University of Edinburgh and/or Heriot-Watt University.
- In addition to taught courses, students will attend the weekly Applied and Computational Mathematics seminar.
- Further cohort-building activities, such as staff-student lunches and elevator pitches of research topics, will be organised throughout.

The students trained on this programme will gain expertise in a broad array of modern mathematical methodologies, including, for example, Bayesian inference, uncertainty quantification, numerical analysis, machine learning, and fluid dynamics, together with experience of applying these in multidisciplinary contexts and of industrial collaboration.

# PhD program at GSSI

GSSI is an international school of advanced studies in Mathematics, Physics, Computer Science and Social Science. Located in the city of L’Aquila (Italy), in the heart of the Apennine Mountains east of Rome, the institute offers a stimulating environment, with numerous PhD students and postdoctoral researchers selected internationally every year. English is the official language of the institute.

Applications for the PhD program at GSSI open every year in March, with several open positions in all four areas. The application deadline is usually in mid-June.

The PhD program lasts four years, with qualifying exams to be taken during the first year. The scholarship offers a monthly stipend, research funds, and free housing in the institute’s residences in the city center of L’Aquila.

More details on how to apply can be found here and here. I am affiliated with the School of Mathematics, but I can supervise theses in both the mathematics and the computer science areas.

# Postdoc positions at GSSI

The math division at GSSI opens calls for postdoctoral applications from highly motivated early-career researchers once or twice a year. Positions are usually open across the entire school of math and are highly competitive. If you are interested in doing a postdoc with us at GSSI, please contact me and check the Home and Announcements pages on the GSSI website for open calls for postdoctoral positions.