Low-rank approximation and structured matrices
This research area deals with the problem of efficiently finding a low-rank factorization of a matrix $A$, of the form
$$A \approx UV^T,$$
where $U,V$ are tall and skinny matrices. In most cases of interest one assumes either that the products $X \mapsto AX$ and $Y \mapsto A^TY$ can be evaluated, or that selected entries of $A$ can be sampled at a reduced cost. Several strategies may be adopted for retrieving the matrices $U,V$, such as randomized linear algebra methods or cross-approximation techniques.
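As a minimal illustration of the randomized approach (a basic range-finder sketch, not the specific algorithms studied by the group), the following Python/NumPy snippet builds $A \approx UV^T$ assuming only that the block products $X \mapsto AX$ and $Y \mapsto A^TY$ are available; sizes, the oversampling parameter and the test matrix are illustrative choices.

```python
import numpy as np

def randomized_lowrank(matvec, rmatvec, n, rank, oversample=10, seed=0):
    """Basic randomized range-finder: build A ~ U @ V.T using only the
    products X -> A X (matvec) and Y -> A^T Y (rmatvec); n = #columns of A."""
    rng = np.random.default_rng(seed)
    k = rank + oversample
    Omega = rng.standard_normal((n, k))     # random test matrix
    Q, _ = np.linalg.qr(matvec(Omega))      # orthonormal basis of range(A @ Omega)
    B = rmatvec(Q)                          # B = A^T Q, so A ~ Q @ B.T
    Ub, s, Vt = np.linalg.svd(B.T, full_matrices=False)
    U = (Q @ Ub[:, :rank]) * s[:rank]       # truncate to the requested rank
    V = Vt[:rank, :].T
    return U, V                             # A ~ U @ V.T

# Toy check on an explicitly rank-8 matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((500, 8)) @ rng.standard_normal((8, 400))
U, V = randomized_lowrank(lambda X: A @ X, lambda Y: A.T @ Y, A.shape[1], rank=8)
print(np.linalg.norm(A - U @ V.T) / np.linalg.norm(A))    # ~ machine precision
```

The oversampling parameter trades a small amount of extra work for a more reliable capture of the dominant singular subspace.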
A natural evolution of this idea is to consider structured matrices, which are, in general, not low-rank, but may have low-rank blocks, or may be obtained as low-rank perturbations of particularly structured matrices. The aim is to represent such matrices using $\mathcal O(n)$ or even $\mathcal O(1)$ storage and to perform operations with a similar complexity (up to logarithmic factors in the dimension).
In particular, the following structures are considered:
- Rank-structured matrices: matrices with low-rank off-diagonal blocks (HODLR), which can be represented even more compactly using hierarchically nested bases (HSS).
- Low-rank perturbations of Toeplitz matrices: matrices with this structure form an algebra, which has important applications in the study of queues and Markov chains (see also the section on Markov chains and complex networks below). Matrices in this class can often be represented with storage independent of the dimension, enabling the numerical study of infinite and semi-infinite matrices.
For matrices with these structures, efficient arithmetic operations can be performed, and MATLAB toolboxes for handling them are available (hm-toolbox for rank-structured matrices and cqt-toolbox for quasi-Toeplitz ones).
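To give a flavour of why Toeplitz structure leads to fast arithmetic (a standard textbook construction in Python/NumPy, not the cqt-toolbox implementation), a Toeplitz matrix-vector product can be computed in $\mathcal O(n\log n)$ operations by embedding the matrix into a circulant one, which the FFT diagonalizes:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """Product of the Toeplitz matrix toeplitz(c, r) with x in O(n log n),
    by embedding it into a 2n x 2n circulant matrix; the circulant product
    is a circular convolution, evaluated with the FFT."""
    n = len(c)
    col = np.concatenate([c, [0.0], r[:0:-1]])   # first column of the circulant
    x_pad = np.concatenate([x, np.zeros(n)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x_pad))
    return y[:n].real

# Check against the dense product on a small example.
rng = np.random.default_rng(0)
c, r = rng.standard_normal(6), rng.standard_normal(6)
r[0] = c[0]                                      # consistent diagonal entry
x = rng.standard_normal(6)
print(np.allclose(toeplitz(c, r) @ x, toeplitz_matvec(c, r, x)))   # True
```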
- Members: P. Boito, L. Gemignani, S. Massei, B. Meini, L. Robol.
- Collaborations: D. Kressner (EPFL), B. Iannazzo (UniPG), D. A. Bini
Nonlinear eigenvalue problems and polynomial rootfinding
Given a matrix-valued function $F(z)$, its eigenvalues are the points in the complex plane where $F(z)$ has a non-trivial kernel, i.e., where its rank drops. Computing the eigenvalues and eigenvectors (the vectors in the kernel) of a general nonlinear function is challenging, and can often be achieved by appropriate matrix iterations, or by linearizing the problem and solving a generalized eigenvalue problem.
The latter strategy is particularly appealing when $F(z)$ is either a matrix or a scalar polynomial. For this specific case, suitably structured versions of the QR iteration have been developed. For the scalar case, the group has developed a software package capable of computing the roots of high-degree polynomials with arbitrary accuracy (MPSolve).
This area is closely related to structured matrices, in particular semiseparable and rank-structured ones.
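As a minimal sketch of the linearization idea in the scalar case (roughly what numpy.roots does in double precision, whereas MPSolve relies on much more refined multiprecision iterations), the roots of a polynomial can be computed as the eigenvalues of its companion matrix:

```python
import numpy as np

def roots_via_companion(coeffs):
    """Roots of p(z) = c[0] z^d + c[1] z^(d-1) + ... + c[d], computed as the
    eigenvalues of the companion matrix, i.e., the simplest linearization."""
    c = np.asarray(coeffs, dtype=complex)
    c = c / c[0]                       # make the polynomial monic
    d = len(c) - 1
    C = np.zeros((d, d), dtype=complex)
    C[1:, :-1] = np.eye(d - 1)         # ones on the subdiagonal
    C[:, -1] = -c[:0:-1]               # last column: -[c_d, ..., c_1]
    return np.linalg.eigvals(C)

# Example: p(z) = z^3 - 6 z^2 + 11 z - 6 = (z - 1)(z - 2)(z - 3).
print(np.sort(roots_via_companion([1, -6, 11, -6]).real))   # approx. [1. 2. 3.]
```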
- Members: P. Boito, L. Gemignani, B. Meini, L. Robol, F. Poloni
- Collaborations: G. Fiorentino, D. A. Bini
Matrix Equations and Matrix Functions
Many problems from the real world and from Scientific Computing are modeled by matrix equations or by matrix functions. For instance, the celebrated algebraic Riccati equation, which in continuous-time models takes the form $XBX+AX+XD+C=0$, is related to the stability analysis of dynamical systems. Here, $A,B,C,D$ are given matrices of compatible sizes and $X$ is the unknown matrix. Quadratic equations like $AX^2+BX+C=0$ model damped vibration problems as well as stochastic models encountered in the analysis of queues. In the case of queues of M/G/1 type the equations take the form $\sum_{i=-1}^{\infty} A_{i}X^{i+1}=0$, where a matrix-valued function, analytic over a suitable domain, is involved.
The goal of the research in this area is to develop tools for designing fast and effective algorithms to solve these kinds of equations. Such equations, as well as similar ones, can be recast as (generalized) eigenvalue problems; for instance, the quadratic equation above is related to finding $\lambda\in\mathbb{C}$ and $x\in\mathbb{C}^{n}$ such that $(\lambda^2 P +\lambda Q + R)x=0$ (quadratic eigenvalue problem). The techniques needed combine those used for general nonlinear equations (multivariate Newton methods, fixed-point iterations) and for eigenvalue problems (Schur decompositions, orthogonal reductions, rational approximations).
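A minimal sketch of this linearization step for the quadratic eigenvalue problem, using a companion-type pencil and a dense generalized eigensolver (purely illustrative; the algorithms studied here exploit the structure of the problem rather than forming dense pencils):

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eig(P, Q, R):
    """Eigenpairs of the QEP (lam^2 P + lam Q + R) x = 0, via a companion-type
    linearization A z = lam B z with z = [lam*x; x]."""
    n = P.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    A = np.block([[-Q, -R], [I, Z]])
    B = np.block([[P, Z], [Z, I]])
    lam, Zv = eig(A, B)               # dense generalized eigenvalue problem
    return lam, Zv[n:, :]             # the x part is the lower block of z

# Scalar sanity check: P = Q = 1, R = -6 gives lam^2 + lam - 6 = 0.
lam, _ = quadratic_eig(np.array([[1.0]]), np.array([[1.0]]), np.array([[-6.0]]))
print(np.sort(lam.real))              # approx. [-3.  2.]
```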
Matrix structures (such as entrywise nonnegativity, symmetry and symplecticity) play a crucial role throughout: they are needed to define the solutions of these equations in the first place, and to ensure feasibility, accuracy and computational efficiency of the numerical algorithms.
Applications of these equations include control and system theory, queuing theory and structured Markov chain modelling in applied probability, and time series estimation in statistics. It is useful to interface directly with researchers working in these application fields: the different points of view provide useful insight, and the algorithms can be better tailored to the needs of practitioners.
In several applications, different sets of measurements produce different symmetric positive-definite matrices as a result; a natural problem is then to find the most plausible value of the underlying matrix. This corresponds to a form of averaging. The plain arithmetic mean $\frac{1}{n}(X_1+X_2+\dots+X_n)$ is not always the best choice from this point of view.
The concept of geometric mean of a set of positive numbers can be extended to a set of positive definite matrices. However, this extension is not trivial: while for two matrices there is a unique definition, for more than two matrices there are infinitely many valid ones.
The most general setting in which to study this problem is that of Riemannian geometry: one endows the manifold of symmetric positive-definite matrices with a Riemannian scalar product, and studies the point which minimizes the sum of squared distances from the given matrices (Cartan mean). This gives rise to a mean which is, in a suitable sense, compatible with matrix inversion, much like the geometric mean in the scalar case.
In addition to the theoretical aspects, the practical computation of these means is an interesting problem, a special case of optimization on manifolds. Classical multivariate optimization methods need to be adapted to work on a generic manifold; one needs to consider the role of its tangent spaces and to construct suitable maps to and from them.
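A minimal sketch of such a computation is the following fixed-point (Riemannian gradient) iteration with unit step and no safeguards; it is only illustrative and not one of the algorithms developed by the group. The tangent-space maps are the matrix logarithm and exponential at the current iterate.

```python
import numpy as np
from scipy.linalg import sqrtm, logm, expm, inv

def karcher_mean(mats, iters=50, tol=1e-10):
    """Approximate the Cartan (Karcher) mean of SPD matrices by a basic
    fixed-point iteration: map the data to the tangent space at the current
    iterate X (matrix logarithm), average there, and map back (matrix
    exponential).  Plain unit-step gradient scheme, for illustration only."""
    X = mats[0]                                   # starting guess
    for _ in range(iters):
        Xh = sqrtm(X)
        Xih = inv(Xh)
        # Average of the data points in the tangent space at X.
        S = sum(logm(Xih @ A @ Xih) for A in mats) / len(mats)
        X = Xh @ expm(S) @ Xh
        X = (X + X.T) / 2                         # clean up round-off
        if np.linalg.norm(S) < tol:               # gradient (almost) zero
            break
    return X

# For two matrices the Karcher mean is the usual geometric mean
# A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); A = A @ A.T + 4 * np.eye(4)
B = rng.standard_normal((4, 4)); B = B @ B.T + 4 * np.eye(4)
Ah = sqrtm(A); Aih = inv(Ah)
print(np.allclose(karcher_mean([A, B]), Ah @ sqrtm(Aih @ B @ Aih) @ Ah))  # True
```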
This kind of mean is of great importance in applications, especially in engineering and in radar detection problems.
Our goal is to provide a better understanding of the different concepts involved in this research, new definitions of means that are better suited to the application models, and faster, more reliable algorithms for their computation.
- Members: D.A. Bini, P. Boito, B. Meini, F. Poloni.
- Collaborations: S. Hautphenne (Melbourne), C.-H. Guo (Regina), B. Iannazzo (Perugia), G. Latouche (Bruxelles), S. Massei (TU Eindhoven), V. Mehrmann (Berlin), G.T. Nguyen (Adelaide), V. Ramaswami (AT&T Labs, New York), G. Sbrana (Rouen), C. Schroeder (Berlin), T. Reis (Hamburg), R. Vandebril (KU Leuven).
Markov chains and Complex Networks
Many applied problems are modeled by Markov chains. Very often the set of states is huge, or even infinite; in these cases customary techniques are not suited to solving such problems.
The goal of this research area is the design and analysis of effective solution algorithms for infinite Markov chains, with special attention to those coming from queuing models. This goal is achieved by developing theoretical tools relying on complex analysis, numerical analysis, and structured matrix computations.
A close connection exists between Markov chains and complex networks, i.e., graphs with particular structures. On these graphs, it is of great interest in applications to define centrality measures that rank the nodes by importance. The computation of such measures can be challenging for large networks, and techniques based on iterative methods for matrix functions and large-scale linear systems can be adopted in this context.
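As a toy example of such a computation, the following snippet computes PageRank scores by power iteration on a small directed graph; the graph, the damping parameter and the tolerances are illustrative choices, and real networks would be far larger and sparser.

```python
import numpy as np
import scipy.sparse as sp

def pagerank(A, alpha=0.85, tol=1e-10, maxiter=200):
    """PageRank scores for a directed graph with sparse adjacency matrix A,
    computed by power iteration x <- alpha * P x + (1 - alpha)/n, where P is
    the column-stochastic transition matrix of the random walk."""
    n = A.shape[0]
    out_deg = np.asarray(A.sum(axis=1)).ravel()
    out_deg[out_deg == 0] = 1.0                      # avoid division by zero
    P = (sp.diags(1.0 / out_deg) @ A).T.tocsr()      # P = A^T D^{-1}
    x = np.full(n, 1.0 / n)
    for _ in range(maxiter):
        x_new = alpha * (P @ x) + (1.0 - alpha) / n
        x_new /= x_new.sum()                         # keep x normalized
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new
        x = x_new
    return x

# Toy directed graph on 4 nodes.
edges = [(0, 1), (1, 2), (1, 3), (2, 0), (3, 0)]
rows, cols = zip(*edges)
A = sp.csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(4, 4))
print(pagerank(A).round(3))          # node 0 gets the highest score
```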
- Members: P. Boito, S. Massei, B. Meini, F. Poloni.
- Collaborations: P. Favati (CNR Pisa), C. H. Foh (Surrey), G. Latouche (Bruxelles), V. Ramaswami (AT&T Labs, New York), B. Van Houdt (Antwerp), M. Zukerman (Hong Kong), D.A. Bini, S. Steffè.
High-Performance Computing
Research in numerical linear algebra for high-performance computing focuses on developing efficient algorithms and techniques for solving large-scale linear algebra problems that arise in various computational tasks, such as the solution of partial differential equations, data analysis, optimization, and machine learning.
This field addresses challenges related to the scalability, accuracy, and performance of numerical methods when dealing with massive datasets or complex mathematical models. We investigate algorithms that can exploit the parallelism and vectorization capabilities of modern high-performance computing architectures, such as multi-core CPUs, GPUs, and specialized accelerators like FPGAs or TPUs. Particular emphasis is given to open-source software development, to the design and implementation of innovative algorithms within the PSCToolkit suite of libraries (https://psctoolkit.github.io/), and to the development of preconditioners that improve the convergence rate and stability of iterative methods for large sparse linear systems, which are particularly important for problems arising from partial differential equations.
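As a small serial illustration of the role of preconditioning (not PSCToolkit itself, which targets parallel architectures and far more sophisticated preconditioners), the following snippet runs the conjugate gradient method on a badly scaled sparse SPD matrix with and without a simple diagonal (Jacobi) preconditioner; the test matrix and parameters are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

# Badly scaled SPD test problem: a 2D Poisson (5-point stencil) matrix,
# symmetrically rescaled so that its diagonal spans several orders of
# magnitude, a situation where even a diagonal preconditioner helps.
m = 64
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
P2d = sp.kron(sp.eye(m), T) + sp.kron(T, sp.eye(m))
rng = np.random.default_rng(0)
D = sp.diags(10.0 ** rng.uniform(0, 3, P2d.shape[0]))
A = (D @ P2d @ D).tocsr()
b = rng.standard_normal(A.shape[0])

# Jacobi (diagonal) preconditioner, applied as v -> v / diag(A).
d = A.diagonal()
Minv = LinearOperator(A.shape, matvec=lambda v: v / d)

def cg_iterations(A, b, M=None):
    """Run CG and return (iteration count, convergence flag)."""
    count = [0]
    def cb(xk):
        count[0] += 1
    _, info = cg(A, b, M=M, maxiter=5000, callback=cb)
    return count[0], info

print("plain CG  :", cg_iterations(A, b))          # many iterations (or maxiter)
print("Jacobi PCG:", cg_iterations(A, b, Minv))    # far fewer iterations
```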
- Members: F. Durastante, L. Heltai
- Collaborators: Pasqua D’Ambra (IAC-CNR), Salvatore Filippone (UNITOV)
Quantum walks and block encodings
The research in this area focuses on quantum walks and quantum block encodings. Quantum walks are the quantum counterpart of random walks on a graph and, like classical random walks, they can be defined in a continuous-time or in a discrete-time framework. Continuous-time quantum walks evolve according to the Schrödinger equation on a suitable Hilbert space. For discrete-time quantum walks, on the other hand, several different definitions of the evolution operator are available, in most cases following the coined or the Szegedy formalism. Both continuous-time and discrete-time quantum walks have been proved to be universal for quantum computation.
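As a small illustration of the continuous-time case (the graph, the initial state and the time grid are arbitrary choices), the walk on a cycle graph with the adjacency matrix taken as Hamiltonian evolves as $|\psi(t)\rangle = e^{-\mathrm{i}At}|\psi(0)\rangle$:

```python
import numpy as np
from scipy.linalg import expm

# Adjacency matrix of a cycle graph on n vertices, used as the Hamiltonian.
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Continuous-time quantum walk: psi(t) = exp(-i A t) psi(0), starting with the
# walker localized at vertex 0.  Unitarity keeps the probabilities summing to 1.
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0
for t in (0.5, 1.0, 2.0):
    psi_t = expm(-1j * t * A) @ psi0
    prob = np.abs(psi_t) ** 2
    print(f"t = {t}: probabilities = {prob.round(3)}, sum = {prob.sum():.6f}")
```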
Quantum walks are often used for algorithm design. In particular, thanks to their favorable diffusion properties, they are frequently employed in the context of unstructured search problems, resulting in a quadratic speedup with respect to classical approaches. Further applications include the formulation of quantum centrality and communicability measures in network analysis.
Research activity in this field focuses on the study of hitting times for various kinds of discrete-time quantum walks on directed graphs, and on the applications of continuous- and discrete-time quantum walks in network analysis.
Quantum block encoding (QBE), on the other hand, is a fundamental ingredient of quantum numerical linear algebra. Its importance stems from the fact that quantum evolution is necessarily unitary: in particular, every matrix needs to be represented in unitary form, for instance as a suitable quantum circuit. Roughly speaking, a block encoding of a matrix $M$ is a unitary matrix $U$ that contains $M/\alpha$ as its top-left block, where $\alpha\geq\|M\|_2$ is a scaling factor. The presence of structure in $M$ facilitates the development of effective block-encoding strategies, which are crucial to ensure significant speedups in quantum algorithms, for instance solving an $N\times N$ linear system with runtime polylogarithmic in $N$.
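A minimal classical sketch of the algebraic fact behind block encodings is the unitary dilation of a contraction; here a slight over-scaling of $\alpha$ is used for numerical safety, and an actual quantum block encoding would of course realize $U$ as a circuit rather than as a dense matrix.

```python
import numpy as np
from scipy.linalg import sqrtm

def block_encoding(M, safety=1.1):
    """Unitary dilation U = [[S, (I - S S^H)^(1/2)], [(I - S^H S)^(1/2), -S^H]]
    with S = M / alpha and alpha = safety * ||M||_2, so that the top-left block
    of U is M / alpha.  The over-scaling keeps the square roots numerically
    well defined.  (Classical construction, for illustration only.)"""
    alpha = safety * np.linalg.norm(M, 2)
    S = M / alpha
    I = np.eye(M.shape[0])
    U = np.block([
        [S,                          sqrtm(I - S @ S.conj().T)],
        [sqrtm(I - S.conj().T @ S),  -S.conj().T],
    ])
    return U, alpha

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
U, alpha = block_encoding(M)
print(np.allclose(U @ U.conj().T, np.eye(8)))   # U is unitary
print(np.allclose(U[:4, :4], M / alpha))        # and encodes M / alpha
```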
- Members: P. Boito, G. Del Corso
- Collaborators: Anna Bernasconi (UniPi), Roberto Grena (ENEA)