We view this work as a natural continuation of our prior work in this area, which has focused on deploying small GPU clusters for small radio telescopes. SKA telescopes raise the computation and communication requirements significantly, making the choice of a suitable architecture a non-trivial task. This work raises the bar well beyond ours and others' previous efforts, since we aim to investigate the regime of "large N" radio telescopes, where N is the number of radio telescope antennas (dishes). Because computation scales quadratically with N, this moves the problem from presently deployed telescopes, which require O(10) Tflops, into the multi-Petaflop regime. The regime we seek to explore has been untouched by astronomers and is discussed only in the context of what may be possible "tomorrow". Demonstrating that an HPC system can process these large data sets will have major significance for how future radio telescopes are designed, with a successful effort weighing strongly in favour of HPC over custom hardware approaches. The expectation is that an "off-the-shelf" supercomputer will cost far less than a custom processing pipeline. A successful demonstration of strong scaling (in astronomy terms, that processed bandwidth scales with the number of nodes) has special significance, since it would show that science goals can be related simply to the size of the cluster.
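The quadratic scaling argument can be made concrete with a back-of-envelope estimate of correlator (X-engine) compute, which grows with the number of baselines, N(N+1)/2. The sketch below is illustrative only: the 8-flop cost per complex multiply-accumulate, the dual-polarisation assumption, and the example antenna counts and bandwidth are our assumptions, not figures from the text.

```python
def correlator_tflops(n_antennas, bandwidth_hz, n_pol=2, flops_per_cmac=8):
    """Rough sustained compute for the cross-correlation (X-engine) stage.

    Cost scales with the number of baselines, N*(N+1)/2 (including
    autocorrelations), times polarisation products, times the sample
    rate set by the processed bandwidth. All constants are assumptions.
    """
    baselines = n_antennas * (n_antennas + 1) // 2
    flops = baselines * n_pol**2 * bandwidth_hz * flops_per_cmac
    return flops / 1e12  # TFLOP/s

# Hypothetical arrays at 100 MHz processed bandwidth: a present-day-scale
# array lands in the O(10) Tflops range, while a "large N" array of a few
# thousand dishes pushes into the multi-Petaflop regime.
small = correlator_tflops(64, 100e6)
large = correlator_tflops(2048, 100e6)
```

Doubling N roughly quadruples the compute requirement, which is why architectures adequate for today's telescopes cannot simply be scaled up node-for-node.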
| Source | Hours | Start Date | End Date |
|--------|-------|------------|----------|