Compile VASP using Intel oneAPI compilers

Compile VASP (6.4.1) within WSL2 Ubuntu using the Intel oneAPI compilers, with MKL acceleration and HDF5 support

Compiled version: VASP 6.4.1


Official documentation for compiling VASP:
Official documentation for makefiles:

Tested PC:
  • Intel(R) Core(TM) i5-14600K 3.50 GHz
  • RTX 2060 SUPER
  • Kingston FURY 32GB KIT DDR5 6000MHz CL32 Renegade
  • WSL2 Ubuntu version: 22.04
1) Install the Intel oneAPI Base Toolkit.
2) Install the Intel® HPC Toolkit.
3) Compile OpenMPI using the Intel oneAPI compilers.
4) Compile HDF5 using the Intel oneAPI compilers to support the h5 output format from VASP (useful, for example, when post-processing data with py4vasp).
5) Compile VASP using the Intel oneAPI compilers.
Not all of the following packages are necessary, but they might be useful: 
sudo apt update
sudo apt upgrade

sudo apt-get install build-essential cmake cmake-curses-gui libopenmpi-dev openmpi-bin libfftw3-dev
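As an optional sanity check (a sketch; the list covers only a subset of the tools the packages above provide), you can confirm the core build tools are on PATH before continuing:

```shell
# Report whether each core build tool is reachable on the current PATH.
REPORT=""
for tool in make cmake gcc; do
  if command -v "$tool" >/dev/null 2>&1; then
    REPORT="$REPORT$tool: ok\n"
  else
    REPORT="$REPORT$tool: MISSING\n"
  fi
done
printf "%b" "$REPORT"
```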
Intel oneAPI Base Toolkit

Download the newest version from

sudo sh ./

Continue with the installation according to the instructions (accept the terms, choose the recommended installation, ignore warnings about a missing GUI and continue, skip the Eclipse configuration).

Activate the oneAPI with: 

source /opt/intel/oneapi/

Add MKL to the ~/.bashrc:

nano ~/.bashrc
export PATH=/opt/intel/oneapi/mkl/2024.0:$PATH
export LD_LIBRARY_PATH=/opt/intel/oneapi/mkl/2024.0/lib/intel64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/2024.0/lib:$LD_LIBRARY_PATH
Intel® HPC Toolkit

Download the newest version from

sudo sh ./

Activate the oneAPI with: 

source /opt/intel/oneapi/
OpenMPI compiled with oneAPI

Download OpenMPI (use the wget command or download it from the OpenMPI website) and compile it:

mkdir OpenMPI_Intel
tar xvzf openmpi-5.0.0.tar.gz -C OpenMPI_Intel
cd OpenMPI_Intel/openmpi-5.0.0
mkdir build
./configure CC=icx CXX=icpx FC=ifort F77=ifort OMPI_CC=icx OMPI_CXX=icpx OMPI_FC=ifort OMPI_F77=ifort --prefix=/home/lebedmi2/SOFTWARE/OpenMPI/OpenMPI_Intel/openmpi-5.0.0/build
make install -j

Activate it (either add the following lines to the end of ~/.bashrc for permanent activation, or enter them directly in the console for temporary activation):

export PATH=/home/lebedmi2/SOFTWARE/OpenMPI/OpenMPI_Intel/openmpi-5.0.0/build/bin:$PATH
export LD_LIBRARY_PATH=/home/lebedmi2/SOFTWARE/OpenMPI/OpenMPI_Intel/openmpi-5.0.0/build/lib:$LD_LIBRARY_PATH
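To confirm that the shell now picks up the freshly built OpenMPI rather than a system copy, you can check where mpirun resolves (a small sketch; it prints "none" if nothing is on PATH yet):

```shell
# Resolve mpirun on the current PATH; "none" means the exports above
# are not active in this shell yet.
MPI_BIN=$(command -v mpirun || echo "none")
echo "mpirun resolves to: $MPI_BIN"
```

The reported path should point into the OpenMPI build directory exported above.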
HDF5 compiled with oneAPI

Create a new folder where HDF5 will be compiled:

mkdir HDF5_Intel
cd HDF5_Intel

Download the CMake version of HDF5 into this folder from the HDF Group website, then unpack it:

tar xvzf CMake-hdf5-1.14.3.tar.gz
cd CMake-hdf5-1.14.3/

Compile the LIBAEC and ZLib sources shipped with the HDF5 bundle:

tar xvzf LIBAEC.tar.gz
tar xvzf ZLib.tar.gz
cd libaec-v1.0.6
mkdir build
cd build
cmake ..
sudo make install
cd ..
cd ..
cd zlib-1.3
mkdir build
cd build
cmake ..
sudo make install

Add to path:

export LD_LIBRARY_PATH=/usr/local/lib:$LD_LIBRARY_PATH
cd ..
cd ..

Copy hdf5-1.14.3 source to the HDF5_Intel dir:

cp -r hdf5-1.14.3/ ..

In the HDF5_Intel dir, make a new directory called “myhdfstuff”. Without this folder, the compilation is problematic:

cd ..
mkdir myhdfstuff

Copy the source files in hdf5-1.14.3 to myhdfstuff:

cp -r hdf5-1.14.3/ myhdfstuff/

Build HDF5 (the configure step enables the Fortran interface, which VASP's HDF5 support requires):

cd myhdfstuff
mkdir build
cd build
cmake -DHDF5_BUILD_FORTRAN=ON ../hdf5-1.14.3
cmake --build . --config Release -j
cpack -C Release CPackConfig.cmake

Scroll down with Enter, accept the license with “y”, and answer “n” to have it installed in the build directory.
Before running simulations with this version, make sure to export the HDF5 directories to the environment variables, either permanently or temporarily:

export PATH=/home/lebedmi2/SOFTWARE/HDF5_Intel/myhdfstuff/build/HDF_Group/HDF5/1.14.3/bin:$PATH
export LD_LIBRARY_PATH=/home/lebedmi2/SOFTWARE/HDF5_Intel/myhdfstuff/build/HDF_Group/HDF5/1.14.3/lib:$LD_LIBRARY_PATH
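Before pointing VASP at this tree, it can help to verify that the install prefix actually contains the expected subdirectories. A small hedged helper (the function name is mine, and /usr is only a stand-in prefix so the sketch runs anywhere; substitute your HDF5 prefix):

```shell
# Check that a candidate install prefix has the bin/, lib/ and include/
# subdirectories that VASP's makefile.include will expect.
check_prefix() {
  prefix="$1"
  for sub in bin lib include; do
    if [ -d "$prefix/$sub" ]; then
      echo "found:   $prefix/$sub"
    else
      echo "missing: $prefix/$sub"
    fi
  done
}
PREFIX_REPORT=$(check_prefix /usr)   # substitute your HDF5 prefix here
echo "$PREFIX_REPORT"
```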
VASP compiled with oneAPI and MKL support
mkdir vasp.6.4.1_Intel_MKL
tar xvzf vasp.6.4.1.tgz -C vasp.6.4.1_Intel_MKL
cd vasp.6.4.1_Intel_MKL/vasp.6.4.1

To run commands without sudo (otherwise you could face problems with paths, see the section below), give the current user rights to the vasp.6.4.1 folder (change only the username ‘lebedmi2’ in the following command; you can check the name of the user with the command ‘whoami’):

sudo chown -R lebedmi2 .

Create makefile.include file:

nano makefile.include

Into makefile.include, copy the contents of makefile.include.intel_ompi_mkl_omp_acc (you can find it in the arch folder of your VASP distribution).

In makefile.include, set the paths to MKL (MKLROOT) and HDF5 (HDF5_ROOT). You must also change icc to icx and icpc to icpx; icc and icpc are deprecated and no longer included in newer versions of oneAPI.

MKLROOT    ?= /opt/intel/oneapi/mkl/2024.0
HDF5_ROOT ?= /home/lebedmi2/SOFTWARE/HDF5_Intel/myhdfstuff/build/HDF_Group/HDF5/1.14.3
CC_LIB = icx
CXX_PARS = icpx

Compile vasp:

make DEPS=1 -j

It will take some time. If no errors appeared, check that all libraries are correctly linked and none are missing:

cd bin
ldd vasp_std
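The same check can be scripted so a forgotten library path is caught immediately (a sketch shown on /bin/ls so it runs anywhere; substitute vasp_std). Any "not found" line in the ldd output means a directory is missing from LD_LIBRARY_PATH:

```shell
# Flag unresolved shared libraries in a binary's dependency list.
BINARY=/bin/ls   # substitute e.g. ./vasp_std
if ldd "$BINARY" 2>/dev/null | grep -q "not found"; then
  LDD_STATUS="missing libraries"
else
  LDD_STATUS="all libraries resolved"
fi
echo "$LDD_STATUS"
```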

Export the number of OpenMP threads in ~/.bashrc (the value below is only an example; tune it to your machine):

export OMP_NUM_THREADS=1

Test whether the compiled version works (run the following command in the same directory as makefile.include; it should produce computed values and no errors):

make test -j 

If more than one VASP build will be installed on the computer, rename the binary:

mv vasp_std vasp_intel

Add the path to vasp_intel to ~/.bashrc:

export PATH=/home/lebedmi2/SOFTWARE/VASP/vasp.6.4.1_INTEL/vasp.6.4.1/bin:$PATH
Run the simulations with VASP Intel, MKL:

Before running the simulations, all the following should be exported (if they are not set permanently in ~/.bashrc). In my case:

#Intel compilers, libraries
source /opt/intel/oneapi/
#OpenMPI compiled with oneAPI
export PATH=/home/lebedmi2/SOFTWARE/OpenMPI/OpenMPI_Intel/openmpi-5.0.0/build/bin:$PATH
export LD_LIBRARY_PATH=/home/lebedmi2/SOFTWARE/OpenMPI/OpenMPI_Intel/openmpi-5.0.0/build/lib:$LD_LIBRARY_PATH
#HDF5 compiled with oneAPI
export PATH=/home/lebedmi2/SOFTWARE/HDF5_Intel/myhdfstuff/build/HDF_Group/HDF5/1.14.3/bin:$PATH
export LD_LIBRARY_PATH=/home/lebedmi2/SOFTWARE/HDF5_Intel/myhdfstuff/build/HDF_Group/HDF5/1.14.3/lib:$LD_LIBRARY_PATH

Then run with, e.g.:

mpirun -np 4 vasp_intel 

To run on all processors and threads:

mpirun --use-hwthread-cpus -np N vasp_intel
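To pick N automatically instead of hard-coding it, nproc reports the number of processing units the WSL2 VM exposes (a sketch; whether this counts hardware threads depends on the mpirun flags used):

```shell
# Derive the rank count from the number of available processing units.
NP=$(nproc)
echo "would run: mpirun -np $NP vasp_intel"
```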

This may not always be effective, as it depends on the simulation settings and the size of the simulated system. For example, with 72 Cr atoms, 10 DAV iterations, 3x3x3 k-point sampling, and a 350 eV cutoff:

Command                                                       Elapsed time (s)
mpirun -np 1 vasp_intel
mpirun -np 1 vasp_aocc_aocl (AOCC, AOCL on Intel processor)
mpirun -np 1 vasp_gcc_mkl (GNU with MKL)
mpirun -np 1 vasp_gpu_mkl (GPU RTX 2060 SUPER)
mpirun -np 4 vasp_intel                                       293
mpirun -np 7 vasp_intel                                       341
mpirun -np 10 vasp_intel                                      326
mpirun --use-hwthread-cpus -np N vasp_intel                   251

The last command runs as 20 MPI ranks with 2 threads per rank on 1 node; VASP reports:
distrk: each k-point on 20 cores, 1 groups

Encountered problems

When running the computation on a smaller number of cores, I needed to solve the following error by writing into the console:

ulimit -s unlimited

It removes the maximum size restriction on the stack memory for programs in a Unix-like system, allowing them to use as much stack space as needed.

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source
                   00007F0D603C9520  Unknown            Unknown     Unknown
vasp_intel         000000000099B52F  Unknown            Unknown     Unknown
vasp_intel         0000000001209389  Unknown            Unknown     Unknown
vasp_intel         00000000012A28BB  Unknown            Unknown     Unknown
vasp_intel         0000000001DE5F26  Unknown            Unknown     Unknown
vasp_intel         0000000001DBD0EA  Unknown            Unknown     Unknown
vasp_intel         000000000041CCED  Unknown            Unknown     Unknown
                   00007F0D603B0D90  Unknown            Unknown     Unknown
                   00007F0D603B0E40  __libc_start_main  Unknown     Unknown
vasp_intel         000000000041CC05  Unknown            Unknown     Unknown
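To see the effect of the fix in the current shell, you can print the stack limit before raising it (a sketch; raising the limit may be restricted in some environments):

```shell
# Print the current soft stack limit (in kB, or "unlimited").
STACK_BEFORE=$(ulimit -s)
echo "stack limit before: $STACK_BEFORE"
# ulimit -s unlimited   # then re-check with: ulimit -s
```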

Solving possible problem with PATHs

In my makefile (it is in the same location as makefile.include), I added the following at the first line to check which environment paths are accessible during compilation:

@echo "Current PATH: $$PATH"

When executing ‘make’, it shows the correct paths (including the path to nvfortran).
When executing ‘sudo make’, it shows:
Current PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
These are the pre-set secure paths used by sudo, not the ones set in ~/.bashrc that the compilation needs. Another way to check the secure paths is to run ‘sudo env’ and look at the row starting with ‘PATH’.

To have access to the paths in ~/.bashrc, I would need to run the compilation without sudo, but doing so results in a ‘permission denied’ error for some files.

You can address this by one of the following options (the first one is the most straightforward):

1) Change the ownership of the VASP folder and all of its contents to the user, so that ‘make’ can be run without sudo:

sudo chown -R lebedmi ~/SOFTWARE/VASP/v.6.4.1/vasp.6.4.1

Change ‘lebedmi’ to your username (you can check it with the command ‘whoami’) and modify ‘~/SOFTWARE/VASP/v.6.4.1/vasp.6.4.1’ to the directory with your extracted VASP. Then you can run the compilation as ‘make DEPS=1 -j4’.


2) First remove the preset /mnt/ folders from the environment variables by writing the following command on the last line of ~/.bashrc (otherwise I was getting the error: env: ‘Files/NVIDIA’: No such file or directory, caused by the space in ‘Program Files’ on the Windows part of the PATH):

nano ~/.bashrc
export PATH=$(echo "$PATH" | tr ':' '\n' | grep -v '/mnt/c' | tr '\n' ':')

Save and exit, then

source ~/.bashrc
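The filter written to ~/.bashrc above can be tried on a sample value first (the PATH below is hypothetical; note that every /mnt/c entry is dropped, including the one with the space that caused the ‘Files/NVIDIA’ error):

```shell
# Apply the same tr/grep filter to a throwaway variable instead of PATH.
SAMPLE_PATH="/usr/local/bin:/mnt/c/Windows/System32:/usr/bin:/mnt/c/Program Files/NVIDIA:/bin"
FILTERED=$(echo "$SAMPLE_PATH" | tr ':' '\n' | grep -v '/mnt/c' | tr '\n' ':')
echo "$FILTERED"   # prints /usr/local/bin:/usr/bin:/bin:
```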

Now you can compile by running the sudo command with ‘env PATH=$PATH’, e.g.:

sudo env PATH=$PATH make DEPS=1 -j4

3) Run ‘sudo visudo’ and, on the row starting with ‘Defaults secure_path="…"’, add the paths you need to be accessible when running commands with ‘sudo’.



Content of makefile.include for VASP Intel MKL compilation:
#export MKLROOT=/opt/intel/oneapi/mkl/2024.0
# Default precompiler options
CPP_OPTIONS = -DHOST=\"LinuxIFC\" \
              -DMPI -DMPI_BLOCK=8000 -Duse_collective \
              -DscaLAPACK \
              -DCACHE_SIZE=4000 \
              -Davoidalloc \
              -Dvasp6 \
              -Duse_bse_te \
              -Dtbdyn \
              -Dfock_dblbuf \
              -D_OPENMP

CPP         = fpp -f_com=no -free -w0  $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)

FC          = mpif90 -qopenmp
FCL         = mpif90

FREE        = -free -names lowercase

FFLAGS      = -assume byterecl -w

OFLAG       = -O2
DEBUG       = -O0

OBJECTS     = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o

# For what used to be vasp.5.lib
CPP_LIB     = $(CPP)
FC_LIB      = $(FC)
CC_LIB      = icx

OBJECTS_LIB = linpack_double.o

# For the parser library
CXX_PARS    = icpx
LLIBS       = -lstdc++

## Customize as of this point! Of course you may change the preceding
## part of this file as well if you like, but it should rarely be
## necessary ...

# When compiling on the target machine itself, change this to the
# relevant target when cross-compiling for another architecture
FFLAGS     += -xHOST

# (Note: for Intel Parallel Studio's MKL use -mkl instead of -qmkl)
FCL        += -qmkl
MKLROOT    ?= /opt/intel/oneapi/mkl/2024.0
LLIBS      += -L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_openmpi_lp64
INCS        =-I$(MKLROOT)/include/fftw

# HDF5-support (optional but strongly recommended)
HDF5_ROOT  ?= /home/lebedmi2/SOFTWARE/HDF5_Intel/myhdfstuff/build/HDF_Group/HDF5/1.14.3
LLIBS      += -L$(HDF5_ROOT)/lib -lhdf5_fortran
INCS       += -I$(HDF5_ROOT)/include

# For the VASP-2-Wannier90 interface (optional)
#WANNIER90_ROOT ?= /path/to/your/wannier90/installation
#LLIBS          += -L$(WANNIER90_ROOT)/lib -lwannier

# For the fftlib library (experimental)
#CPP_OPTION += -Dsysv
#FCL         = mpif90 fftlib.o -qmkl
#INCS_FFTLIB = -I./include -I$(MKLROOT)/include/fftw
#LIBS       += fftlib