Calculation speed comparison

 

List all the executables starting with 'lmp' that are in the system PATH:

for cmd in $(compgen -c lmp); do which $cmd; done
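The same lookup can be scripted from Python as well. This is a minimal sketch mirroring the shell loop above; the function name and structure are mine, not part of any LAMMPS tooling:

```python
import os

def executables_with_prefix(prefix, path=None):
    """Return full paths of executables on PATH whose name starts with prefix,
    similar to `for cmd in $(compgen -c PREFIX); do which $cmd; done`."""
    path = path if path is not None else os.environ.get("PATH", "")
    found = []
    for d in path.split(os.pathsep):
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            full = os.path.join(d, name)
            if name.startswith(prefix) and os.path.isfile(full) and os.access(full, os.X_OK):
                found.append(full)
    return found

# Example: list all LAMMPS binaries visible on the current PATH
for exe in executables_with_prefix("lmp"):
    print(exe)
```

Unlike `compgen`, this only scans PATH directories, so it will not pick up shell aliases or functions.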

List all the executables starting with 'vasp' that are in the system PATH:

for cmd in $(compgen -c vasp); do which $cmd; done

Monitor the load on the processor's cores in the Ubuntu console with htop:

htop

VirtualBox

Solving "Processor must have AVX support" and increasing the speed of Ubuntu in VirtualBox:
  • Find the Command Prompt icon, right-click it and choose Run As Administrator. Enter this command:
    bcdedit /set hypervisorlaunchtype off
  • To turn off the Windows Memory Integrity security feature, on the Windows host navigate to Start > Settings > Update & Security > Windows Security > Device security > Core isolation > Memory integrity. Alternatively, you can disable VBS completely in the Group Policy Editor under Computer Configuration > Administrative Templates > System > Device Guard > Turn On Virtualization Based Security.
  • In the VirtualBox settings of the installed Ubuntu machine, go to System -> Acceleration and set the paravirtualization interface to None.
  • If you encounter the error 'Wsl/Service/CreateInstance/CreateVm/HCS/HCS_E_HYPERV_NOT_INSTALLED' when starting WSL2, the following helped me:
    Control Panel > Programs > Turn Windows features on or off > Untick Hyper-V > Ok > Restart
    Control Panel > Programs > Turn Windows features on or off > Tick Hyper-V > Ok > Restart

LAMMPS

System:

4 000 000 Al atoms, eam/alloy, 500 steps, fix nve 

Download the input file for LAMMPS (based on https://lammpstube.com/2019/11/14/lattice-parameter-calculation/):
https://implant.fs.cvut.cz/test-in/
Download the eam/alloy potential used (or find it in the installed LAMMPS folder):
https://implant.fs.cvut.cz/alcu-eam-alloy/

CPU INTEL accelerator (8 cores, Intel compiler)

mpirun -np 8 lmp_intel -sf intel

0:03:04

CPU OPENMP accelerator (8 cores, Intel compiler)

mpirun -np 8 lmp_intel -sf omp

0:03:33

CPU no accelerator (8 cores, Intel compiler)

mpirun -np 8 lmp_intel

0:04:12

GPU single precision (RTX 2060 super)

lmp_gpu_single -sf gpu 

0:01:05

GPU mixed precision (RTX 2060 super)

lmp_gpu_mixed -sf gpu 

0:01:10

GPU double precision (RTX 2060 super)

lmp_gpu_double -sf gpu 

0:05:37

CPU no accelerator (8 cores, GNU compiler)

mpirun -np 8 lmp_gpu_single

0:04:46

CPU 1 MPI rank x 8 OpenMP threads (GNU compiler)

lmp_gpu_double -sf omp -pk omp 8

0:04:41

CPU OPENMP accelerator (8 cores, GNU compiler)

mpirun -np 8 lmp_gpu_double -sf omp

0:04:22
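To compare the runs at a glance, the H:MM:SS timings above can be reduced to speedups against the plain CPU run. The times are copied verbatim from this section; the small helper and labels are my own:

```python
def to_seconds(hms):
    """Convert an 'H:MM:SS' string to seconds."""
    h, m, s = (int(x) for x in hms.split(":"))
    return 3600 * h + 60 * m + s

# Elapsed times from the LAMMPS benchmark above (Intel-compiled binaries + GPU runs)
runs = {
    "CPU intel, -sf intel": "0:03:04",
    "CPU intel, -sf omp":   "0:03:33",
    "CPU intel, plain":     "0:04:12",
    "GPU single":           "0:01:05",
    "GPU mixed":            "0:01:10",
    "GPU double":           "0:05:37",
}

baseline = to_seconds(runs["CPU intel, plain"])
for label, hms in runs.items():
    t = to_seconds(hms)
    print(f"{label:22s} {t:4d} s  speedup x{baseline / t:.2f}")
```

Note that single- and mixed-precision GPU runs are roughly 4x faster than the 8-core CPU baseline here, while the double-precision GPU run is actually slower than the CPU.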

VASP

System:

72 Cr atoms, 3 x 3 x 3 k-point sampling, 350 eV plane-wave cutoff, 10 Davidson (DAV) iterations

Download the input files for VASP:
INCAR
KPOINTS
POSCAR
POTCAR

PC *A*
  • Intel Core i9-13900K
  • GeForce RTX 4090
  • RAM:
  • SSD:
  • WSL2 Ubuntu version: 22.04
PC *B*
  • Intel(R) Core(TM) i5-14600K 3.50 GHz
  • RTX 2060 SUPER
  • Kingston FURY 32GB KIT DDR5 6000MHz CL32 Renegade
  • WSL2 Ubuntu version: 22.04

Before running the simulations, the following environment variables should be exported for the corresponding compiled VASP version (if they are not set permanently in ~/.bashrc). In my case:

VASP compiled with GNU, HDF5:

#Standardly compiled OpenMPI (with default gcc)
export PATH=/home/lebedmi2/SOFTWARE/OpenMPI/openmpi-5.0.0/build/bin:$PATH

export LD_LIBRARY_PATH=/home/lebedmi2/SOFTWARE/OpenMPI/openmpi-5.0.0/build/lib:$LD_LIBRARY_PATH
#Standardly compiled HDF5 (with default gcc)
export PATH=/home/lebedmi2/SOFTWARE/HDF5_GNU/myhdfstuff/build/HDF_Group/HDF5/1.14.3/bin:$PATH
export LD_LIBRARY_PATH=/home/lebedmi2/SOFTWARE/HDF5_GNU/myhdfstuff/build/HDF_Group/HDF5/1.14.3/lib:$LD_LIBRARY_PATH

VASP compiled with Nvidia HPC SDK, MKL, HDF5:

#Intel compilers, libraries
source /opt/intel/oneapi/setvars.sh
#OpenMPI from HPC toolkit (cuda aware OpenMPI)
export PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/23.11/comm_libs/12.3/hpcx/hpcx-2.16/ompi/bin:$PATH
export LD_LIBRARY_PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/23.11/comm_libs/12.3/hpcx/hpcx-2.16/ompi/lib:$LD_LIBRARY_PATH

Or:

export PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/23.11/comm_libs/12.3/openmpi4/openmpi-4.1.5/bin:$PATH
export LD_LIBRARY_PATH=/opt/nvidia/hpc_sdk/Linux_x86_64/23.11/comm_libs/12.3/openmpi4/openmpi-4.1.5/lib:$LD_LIBRARY_PATH
 
VASP compiled with AOCC, AOCL, HDF5:

#AOCC, AOCL
source /home/lebedmi2/SOFTWARE/ALL_VASP_AOCC/AOCC/setenv_AOCC.sh
source /home/lebedmi2/SOFTWARE/ALL_VASP_AOCC/AOCL/build/4.1.0/aocc/amd-libs.cfg
#OpenMPI AOCC
export PATH=/home/lebedmi2/SOFTWARE/ALL_VASP_AOCC/OpenMPI/OpenMPI_AOCC/openmpi-5.0.0/build/bin:$PATH
export LD_LIBRARY_PATH=/home/lebedmi2/SOFTWARE/ALL_VASP_AOCC/OpenMPI/OpenMPI_AOCC/openmpi-5.0.0/build/lib:$LD_LIBRARY_PATH
#HDF5 AOCC
export PATH=/home/lebedmi2/SOFTWARE/ALL_VASP_AOCC/HDF5/HDF5_AOCC/myhdfstuff/build/HDF_Group/HDF5/1.14.3/bin:$PATH
export LD_LIBRARY_PATH=/home/lebedmi2/SOFTWARE/ALL_VASP_AOCC/HDF5/HDF5_AOCC/myhdfstuff/build/HDF_Group/HDF5/1.14.3/lib:$LD_LIBRARY_PATH

VASP compiled with Intel oneAPI, MKL, HDF5:

#Intel compilers, libraries
source /opt/intel/oneapi/setvars.sh
#OpenMPI compiled with oneAPI
export PATH=/home/lebedmi2/SOFTWARE/OpenMPI/openmpi_intel/openmpi-5.0.0/build/bin:$PATH
export LD_LIBRARY_PATH=/home/lebedmi2/SOFTWARE/OpenMPI/openmpi_intel/openmpi-5.0.0/build/lib:$LD_LIBRARY_PATH
#HDF5 compiled with oneAPI
export PATH=/home/lebedmi2/SOFTWARE/HDF5_Intel/myhdfstuff/build/HDF_Group/HDF5/1.14.3/bin:$PATH
export LD_LIBRARY_PATH=/home/lebedmi2/SOFTWARE/HDF5_Intel/myhdfstuff/build/HDF_Group/HDF5/1.14.3/lib:$LD_LIBRARY_PATH
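All of the export blocks above work the same way: prepending a build's bin directory to PATH decides which mpirun (and VASP binary) the shell finds first, and LD_LIBRARY_PATH plays the analogous role for shared libraries at run time. A minimal sketch of the lookup that PATH performs (the function is my own illustration, not part of any toolchain):

```python
import os

def resolve(command, path):
    """Return the first executable named `command` found on the given
    PATH-style string, mimicking how the shell resolves a command
    after `export PATH=/some/build/bin:$PATH`."""
    for d in path.split(os.pathsep):
        candidate = os.path.join(d, command)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None
```

Because the directory is prepended, its binaries shadow any same-named binaries later on the PATH; this is why sourcing the wrong block can silently pick up an OpenMPI that does not match the VASP build.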

Compiled version                                        | Elapsed time (s)
*A* VASP GPU with MKL, 1 x GPU, 1 x 1 (thread x core)   | 209
*B* VASP GPU, 1 x GPU, 1 x 1 (thread x core)            |
*A* VASP OMP with MKL, 1 x 8 (thread x core)            | 462
*B* VASP OMP with MKL, 1 x 8 (thread x core)            |
*A* VASP OMP, 1 x 8 (thread x core)                     | 585
*B* VASP OMP, 1 x 8 (thread x core)                     |
*A* VASP OMP, 1 x 16 (thread x core)                    | 513
*A* VASP OMP with MKL, 1 x 1 (thread x core)            | 2604
*B* VASP OMP with MKL, 1 x 1 (thread x core)            |
*B* VASP Intel with MKL, 1 x 1 (thread x core)          | 600
*A* VASP OMP, 1 x 1 (thread x core)                     | 3319
*B* VASP OMP, 1 x 1 (thread x core)                     |
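The PC *A* numbers above can likewise be condensed into speedups against the single-thread OMP run. The times are taken verbatim from this section (PC *B* rows without a recorded time are omitted); the labels and helper are mine:

```python
# Elapsed times (s) for PC *A* from the VASP table above
times_a = {
    "GPU with MKL, 1 GPU":   209,
    "OMP with MKL, 1 x 8":   462,
    "OMP, 1 x 8":            585,
    "OMP, 1 x 16":           513,
    "OMP with MKL, 1 x 1":  2604,
    "OMP, 1 x 1":           3319,
}

baseline = times_a["OMP, 1 x 1"]
for label, t in sorted(times_a.items(), key=lambda kv: kv[1]):
    print(f"{label:22s} {t:5d} s  speedup x{baseline / t:.1f}")
```

The RTX 4090 run is roughly 16x faster than the single-thread CPU run, and linking against MKL alone buys a noticeable speedup at every core count.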