J. Austin Ellis is an HPC Research Scientist in the Analytics and AI Methods at Scale group within the Oak Ridge Leadership Computing Facility (OLCF). His research focuses on high-performance computing, machine learning, data analytics, scalable algorithms, and GPU computing. He holds a PhD in Applied Mathematics from North Carolina State University and was a postdoc in the Scalable Algorithms group at Sandia National Laboratories. From 2016 to 2018, he was a PhD intern at ORNL in the HPC Methods for Nuclear Applications group, working on the leadership-class Shift Monte Carlo radiation transport code.


North Carolina State University
Applied Mathematics
Doctor of Philosophy (Ph.D.)
The University of North Carolina at Chapel Hill
Applied Mathematics
Bachelor of Science (B.S.)


2022 — #1 ranking on the HPL-AI Mixed Precision Benchmark with Frontier (6.861 exaflops)

2021 — Best Paper Award, "Revealing power, energy and thermal dynamics of a 200PF pre-exascale supercomputer" at SC21

2020 — ORNL Innovation Award, Exnihilo radiation transport code suite

2019 — Innovation Award, "Scalable Inference for Sparse Deep Neural Networks using Kokkos Kernels" at HPEC 2019 Graph Challenge (Sparse ML)

2018 — Smoky Mountain Data Challenge Champion, "Impact of Urban Weather on Energy Use"


Frank J. Alexander, et al. Co-design Center for Exascale Machine Learning Technologies (ExaLearn). The International Journal of High Performance Computing Applications, 35(6):598–616, 2021.
J. Austin Ellis, Lenz Fiedler, Gabe A. Popoola, Normand A. Modine, J. Adam Stephens, Aidan P. Thompson, Attila Cangi, and Siva Rajamanickam. Accelerating finite-temperature Kohn-Sham density functional theory with deep neural networks. Phys. Rev. B, 104:035120, 2021.
Woong Shin, Vladyslav Oles, Ahmad Maroof Karimi, J. Austin Ellis, and Feiyi Wang. Revealing power, energy and thermal dynamics of a 200PF pre-exascale supercomputer. In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis (SC '21). ACM, New York, NY, USA, Article 12, 1–14, 2021.
Gordon E. Moon, J. Austin Ellis, Aravind Sukumaran-Rajam, Srinivasan Parthasarathy, and P. Sadayappan. ALO-NMF: Accelerated Locality-Optimized Non-negative Matrix Factorization. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD '20). ACM, New York, NY, USA, 1758–1767, 2020.
J. Austin Ellis and Siva Rajamanickam. Scalable Inference for Sparse Deep Neural Networks using Kokkos Kernels. In 2019 IEEE High Performance Extreme Computing Conference (HPEC), Waltham, MA, USA, pp. 1–7, 2019.
J. Austin Ellis, Thomas M. Evans, Steven P. Hamilton, C.T. Kelley, and Tara M. Pandya. Optimization of processor allocation for domain decomposed Monte Carlo calculations. Parallel Computing, 87:77–86, 2019.
Alex Toth, J. Austin Ellis, Thomas M. Evans, Steven P. Hamilton, C.T. Kelley, Roger Pawlowski, and Stuart Slattery. Local improvement results for Anderson acceleration with inaccurate function evaluations. SIAM J. Sci. Comput., 39(5):S47–S65, 2017.