Tensor Algebra: Multidimensional data (ubiquitous in scientific computing and machine learning) can be effectively treated via tensor abstractions.
Dense and sparse tensor algebra, tensor decompositions, and tensor networks pose challenges in the design of efficient algorithms, software abstractions, and numerical methods.
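A common workhorse in this area is the CP decomposition computed by alternating least squares (ALS). Below is a minimal NumPy sketch of one such solver for a dense 3-way tensor; the dense einsum-based MTTKRP and all variable names are illustrative assumptions, not the implementation used in our software.

```python
import numpy as np

np.random.seed(0)
I, J, K, R = 8, 9, 10, 3
A = [np.random.rand(n, R) for n in (I, J, K)]
T = np.einsum('ir,jr,kr->ijk', *A)  # an exact rank-R tensor to recover

# MTTKRP (matricized tensor times Khatri-Rao product), one per mode,
# written as dense einsum contractions for clarity.
mttkrp = [
    lambda T, U: np.einsum('ijk,jr,kr->ir', T, U[1], U[2]),
    lambda T, U: np.einsum('ijk,ir,kr->jr', T, U[0], U[2]),
    lambda T, U: np.einsum('ijk,ir,jr->kr', T, U[0], U[1]),
]

U = [np.random.rand(n, R) for n in (I, J, K)]

def rel_err(U):
    return np.linalg.norm(T - np.einsum('ir,jr,kr->ijk', *U)) / np.linalg.norm(T)

err0 = rel_err(U)
for _ in range(100):          # ALS sweeps
    for m in range(3):        # update each factor by an exact LS solve
        others = [U[i] for i in range(3) if i != m]
        # Hadamard product of Gram matrices gives the normal equations.
        G = (others[0].T @ others[0]) * (others[1].T @ others[1])
        U[m] = mttkrp[m](T, U) @ np.linalg.pinv(G)
err = rel_err(U)
```

Each inner update solves a linear least-squares problem exactly, so the residual is non-increasing across sweeps; the MTTKRP contraction dominates the cost and is a key target for the algorithmic improvements studied in our papers on this topic.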
Matrix Computations: Numerical linear algebra underlies most computational approaches in the data sciences. Fast matrix algorithms provide solutions for nonlinear optimization, low-rank approximation, and eigenvalue problems.
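As a small worked example of low-rank approximation, the truncated SVD yields the best rank-k approximation in the Frobenius norm (Eckart-Young theorem); the matrix and sizes below are illustrative.

```python
import numpy as np

np.random.seed(1)
m, n, k = 100, 80, 5
# A nearly rank-k matrix: a low-rank product plus small noise.
M = np.random.rand(m, k) @ np.random.rand(k, n) + 1e-6 * np.random.rand(m, n)

# Truncate the SVD to the k leading singular triplets.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
Mk = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]

abs_err = np.linalg.norm(M - Mk)
# By Eckart-Young, the error equals the norm of the discarded singular values.
tail = np.sqrt(np.sum(s[k:] ** 2))
```

Because the input is nearly rank k, the approximation error matches the tiny tail of the singular value spectrum exactly.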
Quantum Systems: Tensor representations provide the most natural way to computationally model entanglement (correlation between electrons). We investigate numerical parallel algorithms for tensor computations arising in quantum chemistry (e.g. high-accuracy electronic structure calculations) and quantum computation (e.g. quantum circuit simulation).
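To see why tensors arise naturally here, note that the state of q qubits is a q-way tensor of shape (2, ..., 2), and applying a gate is a tensor contraction. A minimal statevector sketch (illustrative, not our simulation software):

```python
import numpy as np

q = 3
psi = np.zeros((2,) * q)
psi[(0,) * q] = 1.0                      # the |000> basis state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def apply_1q(psi, gate, k):
    # Contract the gate into qubit k, then restore the axis order.
    out = np.tensordot(gate, psi, axes=([1], [k]))
    return np.moveaxis(out, 0, k)

for k in range(q):
    psi = apply_1q(psi, H, k)
# psi is now the uniform superposition: every amplitude is 1/sqrt(8).
```

Simulating circuits with many entangling gates makes this tensor dense and exponentially large, which motivates the approximate contraction techniques we study.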
Communication-Avoiding Algorithms: Performance and scalability of algorithms and libraries are constrained by data movement within the memory hierarchy and across the network. We aim to design parallel algorithms that minimize both the volume of data communicated and the number of messages. Our group designs such algorithms for problems from a variety of domains, including graph problems, parallel sorting, bioinformatics, and numerical tensor computations.
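A classical illustration of the gains at stake: for dense n-by-n matrix multiplication on P processors, standard "2D" algorithms such as Cannon's move O(n^2/sqrt(P)) words per processor, while "3D" algorithms trade extra memory for an asymptotically lower O(n^2/P^(2/3)). The leading constants below are illustrative; only the asymptotics are the standard results.

```python
# Per-processor communication volume (in words) for n x n matmul on P
# processors; constants are illustrative, asymptotics are the known bounds.
def words_2d(n, P):
    # 2D algorithms (e.g. Cannon's): O(n^2 / sqrt(P)) words moved.
    return 2 * n**2 / P**0.5

def words_3d(n, P):
    # 3D algorithms replicate the matrices to cut communication.
    return 3 * n**2 / P**(2.0 / 3.0)

n, P = 4096, 4096
ratio = words_2d(n, P) / words_3d(n, P)   # 3D advantage grows with P
```

At P = 4096, the model predicts the 3D schedule moves several times fewer words per processor, and the gap widens as P grows, which is why communication-avoiding variants scale further.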
High Performance Numerical Libraries: Parallel numerical libraries are the glue between fast algorithms and real-world applications. We pursue application-driven research on algorithms by way of developing general and scalable library routines.
(February 2020) Find us at SIAM PP 2020, where there will be three talks and three poster presentations by six members of our group.
(January 2020) Annika Dugad and Tianyi Hao will be giving poster presentations at QIP 2020; stop by and learn about our work on approximate techniques for quantum circuit simulation of quantum algorithms for ground states and dynamics.
(October 2019) We have released new papers on tensor completion and tensor decomposition, as well as a survey on fast convolution algorithms.
(October 2019) Congratulations to Linjian Ma and Yuchen Pang on being awarded Computer Science Gene Golub Fellowships.
(May 2019) See Edward Hutter's presentation on Communication-avoiding Cholesky-QR2, accepted as a regular conference paper at IPDPS 2019 in Rio de Janeiro, Brazil.
(May 2019) Welcome (back) to Yuchen Pang and Linjian Ma, who plan to start their PhD work this fall at UIUC as part of LPNA!
(May 2019) LPNA undergraduate researchers are becoming graduates! Zecheng Zhang and Qile Zhi will be pursuing MS programs at Stanford, Naijing Zhang will pursue an MSE at UC Berkeley, Xiaoxiao Wu will pursue an MSCF at CMU, Siyuan Zhang will pursue an MS at UIUC, David Zhang will pursue an MS at Cornell, and Hongru Yang will pursue a PhD at UT Austin!
(May 2019) Congratulations to Caleb Ju for winning the Franz Hohn and J.P. Nash Scholarship.
(Dec 2018) Edward Hutter is one of two UIUC PhD students in the DOE CSGF class of 2018 (CS @ Illinois news article).
(May 2018) Congratulations to Pavle Simonovic (completed BS thesis) and Peter Tatkowski (joining ETH Zurich MS program)!
(May 2018) Congratulations to Qile Zhi for winning the Franz Hohn and J.P. Nash Scholarship.
(June 2017) Congratulations to Tobias Wicky for finishing his MS thesis and to Edward Hutter for finishing his BS thesis!
We are always looking for new collaborators and participants. If you are a UIUC student interested in doing research in the area, email Edgar Solomonik (email@example.com).
web-course Numerical analysis Spring 2018, Fall 2018; CS 450
web-course Parallel numerical algorithms Fall 2017; CS 554
video June 2017; LPNA Lecture; Basics of tensors (Edgar)
video June 2017; LPNA Lecture; Basics of communication complexity (Edgar)
web-course Communication cost analysis of algorithms Fall 2016; CS 598-ES
|report||Caleb Ju and Edgar Solomonik. Derivation and analysis of fast bilinear algorithms for convolution. arXiv:1910.13367 [math.NA], October 2019.|
|report||Navjot Singh, Linjian Ma, Hongru Yang, and Edgar Solomonik. Comparison of accuracy and scalability of Gauss-Newton and alternating least squares for CP decomposition. arXiv:1910.12331 [math.NA], October 2019.|
|report||Zecheng Zhang, Xiaoxiao Wu, Naijing Zhang, Siyuan Zhang, and Edgar Solomonik. Enabling distributed-memory tensor completion in Python using new sparse tensor kernels. arXiv:1910.02371 [cs.DC], October 2019.|
|article||Edward Hutter and Edgar Solomonik. Communication-avoiding Cholesky-QR2 for rectangular matrices. IEEE International Parallel and Distributed Processing Symposium (IPDPS), Rio de Janeiro, Brazil, May 2019, to appear.|
|report||Linjian Ma and Edgar Solomonik. Accelerating Alternating Least Squares for Tensor Decomposition by Pairwise Perturbation. arXiv:1811.10573 [math.NA], November 2018.|
|article||Tobias Wicky, Edgar Solomonik, and Torsten Hoefler. Communication-avoiding parallel algorithms for solving triangular systems of linear equations. IEEE International Parallel and Distributed Processing Symposium (IPDPS), Orlando, FL, June 2017, pp. 678-687. report|