Communication Complexity: The performance and scalability of algorithms and libraries are constrained by data movement in the memory hierarchy and across the network.
We aim to design parallel algorithms that minimize both the volume of data communicated and the number of messages sent.
Matrix Computations: Numerical linear algebra underlies most computational approaches in data sciences. Fast matrix algorithms provide solutions for nonlinear optimization, low-rank approximation, and eigenvalue problems.
Tensor Algebra: Multidimensional data (ubiquitous in scientific computing and machine learning) can be effectively treated via tensor abstractions. Tensors and their decompositions provide numerical tools for hypergraph processing, high-order methods, and neural networks.
High Performance Numerical Libraries: Parallel numerical libraries are the glue between fast algorithms and real-world applications. We pursue application-driven algorithms research by developing general, scalable library routines.
(June 2016) Congratulations to Tobias Wicky for finishing his MS thesis and to Edward Hutter for finishing his BS thesis!
(June 2016) Group webpage is up, welcome!
Tobias Wicky (MS 2017): A communication-avoiding algorithm for solving linear systems of equations with selective inversion
Tobias Wicky, Edgar Solomonik, and Torsten Hoefler, "Communication-avoiding parallel algorithms for solving triangular systems of linear equations," IEEE International Parallel and Distributed Processing Symposium (IPDPS), Orlando, Florida, June 2017 (to appear); arXiv:1612.01855 [cs.DC], December 2016.