LAMMPS Overview
LAMMPS stands for Large-scale Atomic/Molecular Massively Parallel Simulator. It provides potentials for soft materials (biomolecules, polymers), solid-state materials (metals, semiconductors), and coarse-grained systems. It can be used to model atoms or, more generically, as a parallel particle simulator at the mesoscale or continuum level.
Support
Usage
LAMMPS is available on Titan and Eos as a module. For the default LAMMPS module, you must use the GNU programming environment and load the fftw module before loading the lammps module. The following commands add the lmp_titan executable (or lmp_eos, if running on Eos) to your PATH:
$ module swap PrgEnv-pgi PrgEnv-gnu
$ module load fftw
$ module load lammps
An example batch script to use to run a LAMMPS job on Titan is provided below:
#!/bin/bash
#PBS -A <project id>
#PBS -N lammps_test
#PBS -j oe
#PBS -l walltime=1:00:00,nodes=1500
export CRAY_CUDA_MPS=1
aprun -n 24000 lmp_titan < input
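The aprun line above redirects a LAMMPS input script (here named input) into the executable. As an illustration only (this file is hypothetical and not part of the Titan installation), a minimal Lennard-Jones melt input script might look like:

```
# Hypothetical minimal LAMMPS input: 3d Lennard-Jones melt
units        lj
atom_style   atomic
lattice      fcc 0.8442
region       box block 0 10 0 10 0 10
create_box   1 box
create_atoms 1 box
mass         1 1.0
velocity     all create 1.44 87287
pair_style   lj/cut 2.5
pair_coeff   1 1 1.0 1.0 2.5
neighbor     0.3 bin
fix          1 all nve
run          100
```

If the installed build includes the gpu package, LAMMPS can also engage it at run time via suffix flags, e.g. `aprun -n 24000 lmp_titan -sf gpu -pk gpu 1 < input`; verify that your build supports the gpu package before relying on these flags.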
Setting $CRAY_CUDA_MPS to 1 enables CUDA Proxy, which allows multiple MPI processes on a node to share the GPU concurrently. This is not enabled by default on Titan. For more information about CUDA Proxy and GPU contexts, see https://www.olcf.ornl.gov/tutorials/cuda-proxy-managing-gpu-context/.
Builds
TITAN
- lammps@2017.03.31%gcc@4.9.3+colloid+dipole+gpu+kokkos+kspace~lib+manybody+meam+misc+molecule+reax+rigid
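The build string above is a Spack spec. As a rough guide to reading it (annotations are illustrative, not part of the installed spec):

```
lammps@2017.03.31    # package name and version
%gcc@4.9.3           # compiler used for the build
+gpu +kspace +reax   # '+' variants: optional packages enabled
~lib                 # '~' variant: feature disabled (no library build)
```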