Communication Complexity: The performance and scalability of algorithms and libraries are constrained by data movement through the memory hierarchy and across the network.
We aim to design parallel algorithms that minimize both the volume of data moved and the number of messages exchanged.
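These two quantities are commonly captured by the standard latency-bandwidth (alpha-beta) cost model; the formula below is a sketch of that model, with the symbols S, W, alpha, and beta introduced here for illustration (they are not defined on this page):

```latex
% Latency-bandwidth (alpha-beta) model of communication cost:
%   S      = number of messages sent,
%   W      = number of words moved,
%   \alpha = per-message latency,
%   \beta  = per-word inverse bandwidth.
T_{\mathrm{comm}} = \alpha \cdot S + \beta \cdot W
```

Minimizing the number of messages targets the alpha term, while minimizing the volume of communication targets the beta term.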
Matrix Computations: Numerical linear algebra underlies most computational approaches in data sciences. Fast matrix algorithms provide solutions for nonlinear optimization, low-rank approximation, and eigenvalue problems.
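As a small, self-contained illustration of one of these problems (not code from the group's libraries), the sketch below computes a rank-k approximation of a matrix via the truncated SVD, which is optimal in the Frobenius norm by the Eckart-Young theorem; all variable names and sizes are chosen here for the example.

```python
import numpy as np

# Illustrative example: best rank-k approximation of a random matrix
# via the truncated singular value decomposition.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 80))

k = 10
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_k = (U[:, :k] * s[:k]) @ Vt[:k]  # rank-k approximation of A

# By Eckart-Young, the Frobenius-norm error equals the norm of the
# discarded singular values.
err = np.linalg.norm(A - A_k) / np.linalg.norm(A)
```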
Tensor Algebra: Multidimensional data (ubiquitous in scientific computing and machine learning) can be effectively treated via tensor abstractions. Tensors and their decompositions provide numerical tools for hypergraph processing, high-order methods, and neural networks.
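To make the notion of a tensor decomposition concrete, the following sketch computes a truncated higher-order SVD (Tucker-form decomposition) of a small 3-way tensor; this is a generic textbook construction, not the group's software, and all names, shapes, and ranks are illustrative.

```python
import numpy as np

# Illustrative example: truncated higher-order SVD (HOSVD) of a 3-way tensor.
rng = np.random.default_rng(1)
T = rng.standard_normal((8, 9, 10))

def mode_unfold(T, mode):
    # Matricize T along the given mode: that mode's fibers become columns.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

# One orthonormal factor matrix per mode, from the leading left
# singular vectors of each unfolding.
ranks = (4, 4, 4)
factors = []
for mode, r in enumerate(ranks):
    U, _, _ = np.linalg.svd(mode_unfold(T, mode), full_matrices=False)
    factors.append(U[:, :r])

# Core tensor: contract each mode of T with the transpose of its factor.
core = T
for mode, U in enumerate(factors):
    core = np.moveaxis(
        np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode
    )
```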
High Performance Numerical Libraries: Parallel numerical libraries are the glue between fast algorithms and real-world applications. We pursue application-driven research on algorithms by developing general, scalable library routines.
(July 2017) Congratulations to Raul Platero for being a winner of the Outstanding Oral Presentation prize at the 2017 Illinois Summer Research Symposium (ISRS)!
(June 2017) Congratulations to Tobias Wicky for finishing his MS thesis and to Edward Hutter for finishing his BS thesis!
(June 2017) Group webpage is up, welcome!
Tobias Wicky (MS 2017): A communication-avoiding algorithm for solving linear systems of equations with selective inversion
Edward Hutter and Edgar Solomonik, Communication-avoiding Cholesky-QR2 for rectangular matrices, arXiv:1710.08471v1 [cs.DC], October 2017. (report)
Tobias Wicky, Edgar Solomonik, and Torsten Hoefler, Communication-avoiding parallel algorithms for solving triangular systems of linear equations, IEEE International Parallel and Distributed Processing Symposium (IPDPS), Orlando, FL, June 2017, pp. 678-687. (report)