Forge

Description
For full documentation, see the Official Forge User Guide.
Arm DDT

Arm DDT is an advanced debugging tool for scalar, multi-threaded, and large-scale parallel applications. In addition to traditional debugging features (setting breakpoints, stepping through code, examining variables), DDT supports attaching to already-running processes and memory debugging. In-depth debugging information is beyond the scope of this guide; for details, see the Official DDT User Guide.
Additional DDT Articles

In addition to the information below, the following articles can help you perform specific tasks with DDT:
Launching DDT

The first step in using DDT is to launch the DDT GUI. This can either be launched on the remote machine using X11 forwarding, or by running a remote client on your local machine and connecting it to the remote machine. The remote client provides a native GUI (for Linux / OS X / Windows) that should be far more responsive than X11, but requires a little extra setup. It is also useful if you don't have a preconfigured X11 server. To get started with the remote client, follow the DDT Remote Client setup guide. To use X11 forwarding, run the following in a terminal:
$ ssh -X user@<host>.ccs.ornl.gov
$ module load forge
$ ddt &
Running your job

Once you have launched a DDT GUI, you can initiate a debugging session from a batch job script using DDT's "Reverse Connect" functionality. This connects the debug session launched from the batch script to an already-running GUI. This is the most widely applicable method of launching, and allows re-use of any setup logic contained in existing batch scripts. (This method can also be easily modified to launch DDT from an interactive batch session.)
- Copy or modify an existing job script. (If you don't have an existing job script, you may wish to read the section on letting DDT submit your job to the queue).
- Include the following near the top of your job script:

$ source $MODULESHOME/init/bash  # If not already included; makes the module command available
$ module load ddt
- Finally, prefix your launch command with ddt --connect, e.g.:

$ aprun -n 1024 -N 8 ./myprogram

becomes:

$ ddt --connect aprun -n 1024 -N 8 ./myprogram
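Putting the steps above together, a minimal Reverse Connect job script might look like the following sketch. The project ID, resource requests, and program name are placeholders, and exact PBS directives vary by system:

```shell
#!/bin/bash
# Sketch of a batch script using DDT Reverse Connect.
# Placeholders: projectID, walltime/node counts, ./myprogram.
#PBS -A projectID
#PBS -l walltime=01:00:00,nodes=128

source $MODULESHOME/init/bash   # Make the module command available in the batch script
module load forge               # Provides the ddt command

cd $PBS_O_WORKDIR

# Prefix the normal launch line; the session appears in the already-running GUI.
ddt --connect aprun -n 1024 -N 8 ./myprogram
```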
Attaching to a running job

You can also use DDT to connect to an already-running job. To do so, you must be connected to the system on which the job is running. You do not need to be logged into the job's head node (the node from which mpirun was launched), but DDT needs to know the head node. The process is fairly simple:
- Find your job's head node:
  - On Titan and Eos, run qstat -f <jobid> | grep login_node_id. The node listed is the head node.
  - On other systems, run qstat -f <jobid> | grep exec_host. The first node listed is the head node.
- Start DDT by running module load forge and then ddt.
- When DDT starts, select the option to "Attach to an already running program".
- In that dialog box, make sure the appropriate MPI implementation is selected. If not, click the "Change MPI" button and select the proper one.
- If the job's head node is not listed after the word "Hosts", click on "Choose Hosts".
- Click "Add".
- Type the host name in the resulting dialog box and click "OK".
- To make things faster, uncheck any other hosts listed in the dialog box.
- Click "OK" to return.
- Once DDT has finished scanning, your job should appear in the "Automatically-detected jobs" tab. Select it and click the "Attach" button.
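As a sketch of the head-node lookup, the first host in a PBS exec_host field can be extracted with standard text tools. The sample line below is illustrative only (modeled on typical qstat -f output); on a real system you would pipe qstat -f <jobid> into the same filter:

```shell
# Hypothetical sample of an exec_host line from `qstat -f <jobid>` (illustrative).
exec_host_line="    exec_host = node0123/0+node0124/0"

# Strip everything up to "= ", keep the first "+"-separated host, drop the "/cpu" suffix.
head_node=$(printf '%s\n' "$exec_host_line" | sed 's/.*= *//' | cut -d'+' -f1 | cut -d'/' -f1)
echo "$head_node"
```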
Letting DDT submit your job to the queue

This method can be useful when using the DDT Remote Client or when your program doesn't have a complex existing launch script.

- Start DDT after running module load forge.
- When the GUI starts, click the "Run and debug a program" button.
- In the DDT "Run" dialog, ensure the "Submit to Queue" box is checked.
- Optionally select a queue template file (by clicking "Configure" by the "Submit to Queue" box). If your typical job scripts are basically only an aprun command, the default is fine. If your scripts are more complex, you'll need to create your own template file; the default file can be a good start. If you need help creating one, contact firstname.lastname@example.org or see the DDT User Guide.
- Click the "Parameters" button by the "Submit to Queue" box.
- In the resulting dialog box, select an appropriate walltime limit, account, and queue. Then click "OK".
- Enter your executable in the "Application" box, enter any command line options your executable takes on the "Arguments" line, and select an appropriate number of processes and threads.
- Click "Submit". Your job will be submitted to the queue, and your debug session will start once the job begins to run. While it's waiting to start, DDT will display a dialog box showing the job's current queue status.
Starting DDT from an interactive-batch job
Note: To tunnel a GUI from a batch job, the -X PBS option should be used to enable X11 forwarding.

- Start your interactive-batch job with qsub -I -X ... (-X enables X11 forwarding).
- Run module load forge.
- Start DDT with the command ddt &.
- When the GUI starts, click the "Run and debug a program" button.
- In the DDT "Run" dialog, ensure the "Submit to Queue" box is not checked.
- Enter your executable, number of processors, etc.
- Click "Run" to run the program.
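The interactive-batch workflow above can be sketched as follows. The project ID, resource requests, and program name are placeholders:

```shell
# Sketch of an interactive-batch DDT session (placeholders: projectID, ./myprogram).
qsub -I -X -A projectID -l walltime=01:00:00,nodes=4   # -X enables X11 forwarding

# ...then, once the interactive job starts:
module load forge
ddt &
# In the GUI: click "Run and debug a program", leave "Submit to Queue" unchecked,
# enter ./myprogram and the process/thread counts, then click "Run".
```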
Memory Debugging (on Cray systems)

In order to use the memory debugging functionality of DDT on Titan, you need to link against the DDT memory debugging library. (On non-Cray systems, DDT can preload the shared library automatically if your program uses dynamic linking.) In order to link the memory debugging library:
- module load ddt (this determines the location of the library to link).
- module load ddt-memdebug (this tells the CC compiler wrappers to link the library).
- Re-link your program (e.g. by deleting your binary and running make).
- The behavior of ddt-memdebug depends on the current programming environment. For this reason, you may encounter issues if you switch programming environments after ddt-memdebug has been loaded. To avoid this, please ensure that you unload ddt-memdebug before switching programming environments (you can then load it again).
- The Fortran ALLOCATE function cannot currently be wrapped when using the PGI compiler, so allocations will not be tracked or protected.
- When using the Cray compiler, some libraries are compiled in such a way that DDT cannot collect a backtrace when allocating memory. In this case, DDT can only show the location (rather than the full backtrace) at which memory is allocated.
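The linking steps and the programming-environment caveat above can be sketched as a shell session. The binary name and the environment names are placeholders; the module swap is only an example:

```shell
# Sketch: relinking with DDT's memory debugging library on a Cray system.
module load ddt            # Determines the location of the library to link
module load ddt-memdebug   # Tells the CC compiler wrappers to link the library
rm ./myprogram             # Placeholder binary name
make                       # Relinking picks up the memory debugging library

# If you need to change programming environments later, unload first, then reload:
module unload ddt-memdebug
module swap PrgEnv-pgi PrgEnv-cray   # Example swap; actual environments vary by site
module load ddt-memdebug
```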
Debugging scalar/non-MPI programs (Cray systems)

Launching a debug session on the Cray systems requires the program to be linked with the Cray PMI library. (This happens automatically when linking with MPI.) In addition, DDT must be told not to run your program to the MPI_Init function (as it won't be called). If you are using the Cray compiler wrappers, you can load the ddt-non-mpi module (before linking your program) to include the PMI library. The same module should also be loaded prior to running ddt (to tell DDT not to attempt to run to MPI_Init during initialization). Finally, enable the "MPI" option in the DDT run dialog. This ensures DDT launches your program with aprun.

Using the ddt-non-mpi module with the DDT Remote Client

When using the DDT Remote Client, we can't load the ddt-non-mpi module into the client itself. Instead, there are three options:
- If using "Reverse Connect", load the module before launching ddt --connect ...
- Load the ddt-non-mpi module inside your "queue template file" (configured via the "Options" dialog).
- Load the module using the "remote script" mechanism while configuring your remote host in the DDT Remote Client. For convenience, a pre-made script file can be specified as the remote script.
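The Reverse Connect variant of this workflow can be sketched as follows. The source file and binary name are placeholders:

```shell
# Sketch: debugging a non-MPI program on a Cray system (placeholders: myserial.c, ./myserial).
module load forge
module load ddt-non-mpi    # Pull in the Cray PMI library via the compiler wrappers
cc -g -o myserial myserial.c

# Keep ddt-non-mpi loaded when launching, so DDT skips the run-to-MPI_Init step.
# With Reverse Connect, load it before running ddt --connect:
ddt --connect aprun -n 1 ./myserial
```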
Fixing Memory Leaks with DDT Leak Reports

Memory leaks occur when memory is allocated but not correctly freed. This can be particularly problematic if the allocations are large or frequent. Over time, these leaks can degrade performance or, worse, cause the program to fail. DDT's memory debugging features allow analysis of allocated heap memory, both interactively, using the GUI, and non-interactively, using DDT's "offline debugging" mode. The information below will show you how to generate a leak report to pinpoint leaks and eliminate them. Unlike conventional, interactive debugging, these reports can be created during a batch job, meaning you do not need to be present at the time your job is scheduled.
Note: These instructions require DDT 5.0 or above.
Source Code

The source code for this example can be downloaded here. The source code is contained within a git repository with tagged versions "fix-1" and "fix-2". In addition, the "leak-reports" folder contains example leak reports for the different versions, and two queue submission files are included to launch the example program with and without DDT.
Linking with DDT's Memory Debugging Library

The first step towards creating a memory leak report is to link your program with DDT's memory debugging library. This will intercept calls to memory allocation and release functions (such as free) and record their location in your program.
Note: Manual linking is required only for Cray systems, or when using static linking.

Linking with DDT's memory debugging library can be automated by loading the ddt-memdebug module. After loading your usual compilation environment, load the following modules:
$ module load forge
$ module load ddt-memdebug

Then re-link your program. How this is done will vary depending on your build system, but it's often sufficient to delete the application binary and have make regenerate it. For our example:

$ rm mandel
$ make
Launching with DDT

The next step is to launch the program with DDT. As of DDT 5.0, we can prefix our aprun command with the appropriate DDT command. In our example, we can edit submit.qsub to first load the DDT module:
$ source $MODULESHOME/init/bash  # Only required if used in a batch script
$ module load forge

and then modify our aprun command so that:
$ aprun -n 16 ./mandel

becomes:

$ ddt --offline=leak-report.html --mem-debug=fast --check-bounds=off aprun -n 16 ./mandel

The DDT arguments used are as follows:

--offline=leak-report.html: tells DDT to run in non-interactive "offline" mode and save the output to leak-report.html.
--mem-debug=fast: enables the memory debugging options in DDT and uses the "fast" preset. (The "fast" preset runs the fewest memory checks, in order to reduce overhead.)
--check-bounds=off: disables bounds checking (or "guard pages") in DDT. While bounds checking can be useful when tracking down invalid memory accesses, disabling it reduces the runtime and memory overhead.

(The download bundle also contains a pre-modified version of the script, submit-ddt.qsub.) Now we submit our batch job:

$ qsub -A <projectID> submit-ddt.qsub
Interpreting the Output

Once the job has finished, copy the output file (leak-report.html) to your local machine and open it with a web browser. (Alternatively, open leak-reports/initial.html from the source code download.) Scroll down to the leak report section. For scalability reasons, DDT limits the report to the 8 ranks with the greatest memory leakage (this can be controlled with the --leak-report-top-ranks command line argument). In the example shown, we can see that rank 0 has leaked more memory than the others, and that most of the allocations were created by the Packet::allocate function.

Clicking the bar chart item for rank 0 displays a table below showing details of the allocations. This table shows allocations grouped by the backtrace taken when the allocation was made, along with source code snippets. This information can be used to identify code paths leading to the largest leaks. In our example, the first row of the table represents a single 16MB allocation, whereas the second row represents 92 smaller allocations, totaling 14.72MB. All of these allocations share the same allocation site (as noted by "#0 Packet::allocate() (packet.cpp:91)" at the top of each stack), but have taken different paths through the code to get there (as shown by different entries further down the stack).

Once we have the allocation site (the Packet::allocate member function), the next step is figuring out why this allocation isn't freed. From the source code snippet, we can see that the allocation is assigned to the iterations variable. Reading through the Packet class, we can determine that the iterations allocation is owned by the Packet class, and yet the Packet::~Packet destructor doesn't contain code to free the allocation. The simple fix here is to add free(pointer); to the destructor. (See "git show fix-1" for more details.) After making the fix, running "make" will recompile the code.
Another Leak!

After fixing our leak and recompiling, the next step is to verify the leak is gone by resubmitting our job and generating a new leak report. Opening the newly-generated report (or leak-reports/fix-1.html from the source code download) shows that, while we've fixed one of our leak sites, rank 0 is still leaking around 16MB of memory. Clicking the bar chart item will again show us the allocation details. Here we see that the remaining leak was created from a single allocation (again, made in Packet::allocate).

As we've already fixed the leak in the Packet class, we should check that the Packet object itself is being correctly freed. We can do this by methodically working our way up the stack. Following the backtrace, we see that Packet::allocate is called by Packet::stitch, which is in turn called by PacketFactory::stitch. The source code snippet for PacketFactory::stitch shows us that Packet::stitch is being called on the packet member object, so let's verify that this is being freed. With a little reading of the source code (e.g. packetfactory.h), we can see that packet is a plain object member of the PacketFactory class, so when a PacketFactory object is freed, packet should be freed too. Let's jump up another level to find the origin of the PacketFactory. The source (mandel.cpp) shows the PacketFactory is actually an instance of the derived class factory. Hopping up one final time to main, we can see that the factory object is passed to strategy1 as a dereferenced pointer (*factory), and dynamically allocated a little further up the function. The rest of the main function is relatively simple, and we can see that factory isn't directly freed (or passed to any additional function calls where it could be freed).

Now that we've found the source of the leak, there are a few options to fix it: we could rewrite the code to avoid the dynamic allocation entirely, or wrap the pointer in a C++ unique_ptr, but the simplest solution here is to add "delete factory" once we have finished using the factory (i.e. after the switch statement). See "git show fix-2" for more details.

After making our final change, let's recompile and generate one last report to verify our leak has been fixed. Opening the new report (or leak-reports/fix-2.html from the source code download), we see that the chart may initially look busier than our other reports, but the total leaked memory is now only 16.75 kB, and the functions responsible are from various system libraries outside of our control. We've now successfully rid our program of the two memory leaks and improved the correctness of our code. We can also be more confident that our program (at least for the current configuration) is leak-free.
Arm MAP

Arm MAP (part of the Arm Forge suite, with DDT) is a profiler for parallel, multi-threaded, or single-threaded C, C++, Fortran, and F90 codes. It provides in-depth analysis and bottleneck pinpointing down to the source line. Unlike most profilers, it's designed to be able to profile pthreads, OpenMP, or MPI for parallel and threaded code. MAP aims to be simple to use: there's no need to instrument each source file or perform any configuration.
Linking your program with the MAP Sampler (for Cray systems)

In order to collect information about your program, you must link your program with the MAP sampling libraries. When using shared libraries on non-Cray systems, MAP can do this automatically at runtime. On Cray systems, this process must be performed manually. The map-static-link and map-dynamic-link modules can help with this.
- module load forge
- module load map-link-static # or map-link-dynamic
- Re-compile or re-link your program.
Do I need to recompile?

There's no need to instrument your code with Arm MAP, so there's no strict requirement to recompile. However, if your binary wasn't compiled with the -g compiler flag, MAP won't be able to show you source-line information, so recompiling would be beneficial.
Note: If using the Cray compiler, you may wish to use -G2 instead of -g. This will prevent the compiler from disabling most optimizations, which could affect runtime performance.
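As a sketch of the recompile step, the debug flag is simply added to the existing compile lines. The source and binary names are placeholders, and -G2 applies only to the Cray compiler:

```shell
# Sketch: recompiling with debug info so MAP can map samples to source lines.
# Placeholders: myapp.c, myapp. With the Cray compiler, -G2 keeps optimizations enabled.
cc -G2 -O2 -c myapp.c
cc -G2 -O2 -o myapp myapp.o

# With other compilers, use -g instead, e.g.:
#   cc -g -O2 -o myapp myapp.c
```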
Generating a MAP output file

Arm MAP can generate a profile using the GUI or the command line. The GUI option should look familiar to existing users of DDT, whereas the command line option may offer the smoothest transition from an existing launch configuration. MAP profiles are small in size, and there's generally no configuration required other than your existing aprun command line. To generate a profile using MAP, take an existing queue submission script and modify it to include the following:
source $MODULESHOME/init/bash  # May already be included if using modules
module load forge

and then prefix your aprun command so that:
aprun -n 128 -N 8 ./myapp a b c

would become:

map --profile aprun -n 128 -N 8 ./myapp a b c

Once your job has completed running, the program's working directory should contain a timestamped .map file such as myapp_1p_1t_2016-01-01_12-00.map.
Profiling a subset of your application

To profile only a subset of your application, you can either use the --start-after=TIME option and its related command line options (see map --help for more information), or use the API to have your code tell MAP when to start and stop sampling, as detailed here.
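For example, the command-line approach can be sketched as follows; the delay value and program name are placeholders, and available options should be confirmed with map --help on your system:

```shell
# Sketch: start sampling 30 seconds into the run, skipping initialization.
# Placeholders: 30 (seconds), ./myapp and its launch line.
map --profile --start-after=30 aprun -n 128 -N 8 ./myapp a b c
```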
Viewing a MAP profile

Once you have collected a profile, you can view the information using the map command, either by launching it and choosing "Load Profile Data File", or by specifying the file on the command line, e.g.:

map ./myapp_1p_1t_2016-01-01_12-00.map

(The above will require an SSH connection with X11 forwarding, or another remote graphics setup.) An alternative that provides a local, native GUI (for Linux, OS X, or Windows) is to install the Arm Forge Remote Client on your local machine. This client is able to load and view profiles locally (useful when working offline) or remotely (which avoids the need to copy the profile data and corresponding source code to your local machine). The remote client can be used for both Arm DDT and Arm MAP. For more information on how to install and configure the remote client, see the remote client setup page. For more information, see the Arm Forge user guide (also available via the "Help" menu in the MAP GUI).
Additional Arm MAP resources