Francesco Tudisco

Associate Professor (Reader) in Machine Learning

School of Mathematics, The University of Edinburgh
The Maxwell Institute for Mathematical Sciences
School of Mathematics, Gran Sasso Science Institute
JCMB, King's Buildings, Edinburgh EH9 3FD, UK
email: f dot tudisco at ed.ac.uk

Mixture of Neural Operators: Incorporating Historical Information for Longer Rollouts

Harris Abdul Majid, Francesco Tudisco
In: International Conference on Learning Representations (ICLR) Workshop on AI4DifferentialEquations In Science (2024)

Abstract

Traditional numerical solvers for time-dependent partial differential equations (PDEs) notoriously require high computational resources and necessitate recomputation when faced with new problem parameters. In recent years, neural surrogates have shown great potential to overcome these limitations. However, it has been paradoxically observed that incorporating historical information into neural surrogates worsens their rollout performance. Drawing inspiration from multistep methods, which use historical information from previous steps to obtain higher-order accuracy, we introduce the Mixture of Neural Operators (MoNO) framework: a collection of neural operators, each dedicated to processing information from a distinct previous step. We validate MoNO on the Kuramoto-Sivashinsky equation, demonstrating enhanced accuracy and stability over longer rollouts and greatly outperforming neural operators that discard historical information.
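The abstract describes the core construction: one neural operator per historical time step, with the per-step outputs combined into a single prediction. Below is a minimal PyTorch sketch of that idea. The summed combination, the stand-in operator, and all shapes are illustrative assumptions, not the paper's exact architecture.

    # A minimal sketch of the MoNO idea, assuming a PyTorch-style setup.
    import torch
    import torch.nn as nn

    class MoNO(nn.Module):
        """Mixture of Neural Operators: one operator per historical step."""

        def __init__(self, make_operator, num_history):
            super().__init__()
            # A dedicated neural operator for each previous step, echoing how
            # multistep methods assign a separate coefficient to each past state.
            self.operators = nn.ModuleList(
                [make_operator() for _ in range(num_history)]
            )

        def forward(self, history):
            # history: list of the last num_history states, oldest first,
            # each of shape (batch, channels, spatial).
            # Summation is an assumed combination rule for this sketch.
            return sum(op(u) for op, u in zip(self.operators, history))

    # Autoregressive rollout: feed each prediction back into the history window.
    # nn.Conv1d is a hypothetical stand-in for a neural operator (e.g. an FNO).
    model = MoNO(lambda: nn.Conv1d(1, 1, kernel_size=5, padding=2), num_history=3)
    history = [torch.randn(8, 1, 64) for _ in range(3)]
    with torch.no_grad():
        for _ in range(10):
            u_next = model(history)
            history = history[1:] + [u_next]

Keeping the operators separate, rather than concatenating all past states into one network's input, is what distinguishes this design from the naive use of historical information that the abstract reports as harmful to rollout performance.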

Please cite this paper as:

@inproceedings{majid2024mixture,
  title={Mixture of neural operators: Incorporating historical information for longer rollouts},
  author={Majid, Harris Abdul and Tudisco, Francesco},
  booktitle={ICLR 2024 Workshop on AI4DifferentialEquations In Science},
  year={2024}
}


Keywords: deep learning, neural networks, Perron-Frobenius theory, fixed points