Compile LAMMPS with the INTEL package on WSL2 Ubuntu

Compiled version: 2Aug2023 (stable)

Official documentation for the INTEL package:

Official documentation for building the INTEL package:

Official documentation for changing the compiler from GNU to Intel oneAPI compilers:

Tested PC:
  • Intel(R) Core(TM) i5-14600K 3.50 GHz
  • RTX 2060 SUPER
  • Kingston FURY 32GB KIT DDR5 6000MHz CL32 Renegade
  • WSL2 Ubuntu version: 22.04

Not all of the following packages are necessary, but they might be useful:

sudo apt-get update
sudo apt-get upgrade

sudo apt-get install build-essential libtbb-dev cmake cmake-curses-gui libopenmpi-dev openmpi-bin libfftw3-dev libblas-dev liblapack-dev pkg-config ffmpeg python3-dev
sudo apt-get install python3-pip python3.10-venv python3-venv

For MPI, the Intel MPI implementation bundled with the oneAPI Base Toolkit will be used (see below).


LAMMPS INTEL Package installation:

Install Intel oneAPI:

oneAPI Base Toolkit

Download the oneAPI Base Toolkit (


Install it:

sudo sh ./

Continue with the installation according to the instructions (accept the terms, choose the recommended installation, ignore warnings about the GUI and continue, skip the Eclipse configuration).

Add MKL to PATH:

export PATH=/opt/intel/oneapi/mkl/2024.0:$PATH
export LD_LIBRARY_PATH=/opt/intel/oneapi/mkl/2024.0/lib/intel64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=/opt/intel/oneapi/compiler/2024.0/lib:$LD_LIBRARY_PATH

Compile LAMMPS:

Initialize the oneAPI compilers and libraries with:

source /opt/intel/oneapi/

Make sure that the MPI from oneAPI is active:

which mpirun

It should print something like '/opt/intel/oneapi/mpi/2021.11/bin/mpirun'. If not, try to activate it again; this time you will probably need to force the activation with:

source /opt/intel/oneapi/ --force
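
To automate this check, a small shell helper can classify where the mpirun found on PATH comes from. This is only a sketch; the check_mpi function is a made-up name, not part of oneAPI or LAMMPS:

```shell
# Sketch: warn when the mpirun found on PATH is not the oneAPI one.
# check_mpi is a hypothetical helper, not a standard tool.
check_mpi() {
  case "$1" in
    /opt/intel/oneapi/*) echo "oneAPI MPI active" ;;
    *)                   echo "oneAPI MPI NOT active: $1" ;;
  esac
}
check_mpi "$(command -v mpirun || echo none)"
```

If the second branch fires, re-source the oneAPI environment as described above and run the check again.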

Download LAMMPS and create a 'build' directory:

git clone -b stable lammps_intel
cd lammps_intel
mkdir build
cd build

If necessary, modify the permissions of this folder to grant access to the current user. Replace 'lebedmi2' with your username, which can be determined with the 'whoami' command. The '.' symbol represents the current directory:

sudo chown -R lebedmi2 .
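
To verify that the ownership is correct, the directory owner can be compared with the current user. A sketch; check_owner is a hypothetical helper, not a standard tool:

```shell
# Sketch: compare the owner of the current directory with the current user
# and print the chown command to run if they differ.
check_owner() {
  if [ "$1" = "$2" ]; then
    echo "ownership OK"
  else
    echo "run: sudo chown -R $2 ."
  fi
}
check_owner "$(stat -c '%U' .)" "$(whoami)"
```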

Turn on the INTEL package and change the compilers to Intel oneAPI (also turn on a few useful basic packages: ORIENT, PYTHON, OPENMP, MEAM, MANYBODY):

cmake \
-DPKG_INTEL=on \
-DPKG_OPENMP=on \
-DPKG_ORIENT=on \
-DPKG_PYTHON=on \
-DPKG_MEAM=on \
-DPKG_MANYBODY=on \
-DCMAKE_CXX_COMPILER=/opt/intel/oneapi/compiler/2024.0/bin/icpx \
-DCMAKE_EXE_LINKER_FLAGS="-ltbbmalloc -lmkl_intel_ilp64 -lmkl_sequential -lmkl_core" \
-DCMAKE_CXX_FLAGS="-xHost -O2 -fp-model=fast -ansi-alias -qopenmp" \
-DCMAKE_SHARED_LINKER_FLAGS="-L/opt/intel/oneapi/mkl/2024.0/lib" \
../cmake
make -j
make install
mv lmp lmp_intel

Add the build directory to the PATH environment variable in ~/.bashrc.
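
For example, assuming the repository was cloned into the home directory (the path below is an assumption; adjust it to your actual clone location), the line to append to ~/.bashrc would be:

```shell
# Assumed clone location: adjust $HOME/lammps_intel to your actual path.
export PATH=$HOME/lammps_intel/build:$PATH
```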

Run LAMMPS with INTEL accelerator:

Before starting the simulation, make sure the Intel MPI is set (either add the following lines at the end of ~/.bashrc or enter them directly into the console):

export PATH=/opt/intel/oneapi/mpi/2021.11/bin:$PATH
export LD_LIBRARY_PATH=/opt/intel/oneapi/mpi/2021.11/lib:$LD_LIBRARY_PATH

Another way is to source the oneAPI toolkit, which will automatically activate the Intel MPI:

source /opt/intel/oneapi/ --force

Run LAMMPS with the INTEL package:

mpirun -np 8 lmp_intel -sf intel -in
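
Alternatively, instead of the -sf command-line switch, the acceleration can be requested inside the LAMMPS input script itself. A sketch; '0' means no Xeon Phi coprocessors, and 'mode mixed' is just an example precision setting:

```
package intel 0 mode mixed
suffix intel
```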

Supported INTEL accelerated pair styles:

  • airebo, airebo/morse, buck/coul/cut, buck/coul/long, buck, dpd, eam, eam/alloy, eam/fs, gayberne, lj/charmm/coul/charmm, lj/charmm/coul/long, lj/cut, lj/cut/coul/long, lj/long/coul/long, rebo, sw, tersoff

Note: Compiling with the GPU package turned on while using the Intel oneAPI compilers will probably not work, because the CUDA compiler is not compatible with ICPX (I was receiving errors). You will need to compile a second LAMMPS version with the GNU compilers.
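
For that second build, the configure step could look roughly like this. A sketch under assumptions, not tested here: PKG_GPU, GPU_API, and GPU_ARCH are the standard LAMMPS CMake options, and sm_75 matches the RTX 2060 SUPER from the tested PC:

```shell
# Sketch: separate GNU-compiled LAMMPS build with the GPU package.
# Run from a fresh build directory inside the LAMMPS source tree.
cmake \
-DPKG_GPU=on \
-DGPU_API=cuda \
-DGPU_ARCH=sm_75 \
-DCMAKE_CXX_COMPILER=g++ \
../cmake
make -j
```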


Some possible errors: 

This should not appear, but it can be solved: the library bundled with oneAPI did not contain GLIBCXX_3.4.29. However, GLIBCXX_3.4.29 was present in /usr/lib/x86_64-linux-gnu/ (check with: strings /usr/lib/x86_64-linux-gnu/ | grep GLIBCXX). So I had to create a soft link pointing from the oneAPI copy to this one:

Soft link:

sudo ln -sf /usr/lib/x86_64-linux-gnu/ /opt/intel/oneapi/vtune/2024.0/lib64/

In some cases, I needed to add "-msse4.1 -msse3" to the compiler flags to solve the following error:

src/INTEL/intel_intrinsics.h:1610:14: error: always_inline function '_mm_mullo_epi32' requires target feature 'sse4.1', but would be inlined into function 'int_mullo' that is compiled without support for 'sse4.1'

      return _mm_mullo_epi32(a, b);
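
With that workaround, the flags line from the CMake command above becomes (a sketch):

```shell
-DCMAKE_CXX_FLAGS="-xHost -O2 -fp-model=fast -ansi-alias -qopenmp -msse4.1 -msse3" \
```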

The compiler flag '-xHost' resolved the following warning:
warning: Loop was not vectorized. Invalid SIMD region detected for given loop

Calculation speed comparison:

—–> HERE <—–