In addition to the instructions below, Benjamin Hernandez of the OLCF Advanced Data and Workflows Group presented a related talk, GPU Rendering in Rhea and Titan, during the 2016 OLCF User Meeting.
On Titan
- Allocate compute resources through the batch system:
qsub -I -lnodes=2 -lwalltime=01:00:00 -Aabc123
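These flags request an interactive session (-I) on two nodes for one hour, charged to project abc123. Adjust the node count, walltime, and project id to match your own allocation; for example (an illustrative variant only, with a placeholder project id):
qsub -I -lnodes=1 -lwalltime=02:00:00 -AXYZ123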
- From within the batch job, set up the environment:
module load GPU-render
This module loads all of the required modules for you. It is equivalent to running the following:
module load libglut
module load VirtualGL
module load libjpeg-turbo
module load turbovnc
module load cudatoolkit
showres -n $PBS_JOBID | egrep '^[[:digit:]]+[ ]+Job' | head -1 | awk '{printf "nid%0.5d\n", $1}'
When the GPU-render module is loaded, it will print out the node id for the first compute node in the job. The node id will be used later to create an SSH tunnel.
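The printed value takes the form nid followed by a zero-padded five-digit number, for example (a hypothetical id; yours will differ):
nid00123
Make a note of it; it replaces nidXXXXX in the SSH tunnel command in the next section.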
- From within the batch job, run a script on the compute nodes that starts X and runs a test.
aprun -n $PBS_NUM_NODES -N 1 -b $MEMBERWORK/abc123/runX.sh
runX.sh:
#!/bin/sh
startx &
sleep 5
starttvnc :1 &
export DISPLAY=:1
vglrun -v glxspheres64
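Once the glxspheres64 test renders correctly, the same script can be adapted to launch your own OpenGL application under VirtualGL. The sketch below is illustrative only; myVisApp is a hypothetical application name, and everything else mirrors runX.sh above:
#!/bin/sh
# Start the X server and the TurboVNC server exactly as in runX.sh
startx &
sleep 5
starttvnc :1 &
export DISPLAY=:1
# Wrap the application in vglrun so its OpenGL calls render on the node's GPU
vglrun myVisApp
VirtualGL intercepts the application's OpenGL rendering, performs it on the GPU-attached X display started by startx, and reads the frames back into the VNC session, so any program started through vglrun should be hardware accelerated.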
On your local system
- On your local system, create an SSH tunnel from local port 5908 to the compute node:
ssh -L 5908:nidXXXXX:5901 username@titan-internal.ccs.ornl.gov
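The compute node's TurboVNC server listens on port 5901 (5900 plus the display number :1), while 5908 is simply a local port chosen for the tunnel. As a concrete but hypothetical example, if the GPU-render module printed nid00123, the command would be:
ssh -L 5908:nid00123:5901 username@titan-internal.ccs.ornl.gov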
- On your local system, launch the vncviewer and connect through the tunnel.
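Because the tunnel forwards local port 5908 to the VNC server on the compute node, point the viewer at that port on localhost. With the TurboVNC viewer the double-colon form selects a TCP port rather than a display number (a sketch; the exact invocation depends on your local VNC client):
vncviewer localhost::5908
Once connected, the VNC desktop running on the compute node should appear, with the glxspheres64 test window rendering on the node's GPU.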