ALGO = CHI fails for graphene
Moderators: Global Moderator, Moderator
- Newbie
- Posts: 21
- Joined: Tue Sep 15, 2020 3:36 pm
ALGO = CHI fails for graphene
Hello
I am trying to calculate optical properties at the RPA level for graphene (i.e., using ALGO = CHI) with a unit cell of 32 C atoms. I ran a ground-state energy calculation followed by ALGO = EXACT before the ALGO = CHI step. I am using the Lonestar6 (ls6) supercomputer, and the vasp.mpi job script is as follows:
login2.ls6(1186)$ more vasp.mpi
#!/bin/bash
#SBATCH -J vasp
#SBATCH -o vasp.%j.out
#SBATCH -e vasp.%j.err
#SBATCH -n 640
#SBATCH -N 20
#SBATCH -p normal
#SBATCH -t 1:00:00
#SBATCH -A CHE21028
#module load intel/18.0.2
module load intel/19.1.1
module load mvapich2
#module load impi/19.0.9
#module load cray_mpich/7.7.3
#module swap intel intel/17.0.4
#module load vasp/6.1.2
module load vasp/6.3.0
#module load vasp/5.4.4
ibrun vasp_std > vasp_test.out
20 nodes provide a total of 20 x 256 GB ~ 5.1 TB of RAM. The KPOINTS file is as follows:
Auto
0
G
2 2 1
0.0 0.0 0.0
VASP reports its expected memory use as follows:
total amount of memory used by VASP MPI-rank0 4782397. kBytes
=======================================================================
base : 30000. kBytes
nonlr-proj: 4428. kBytes
fftplans : 1219. kBytes
grid : 7411. kBytes
one-center: 98. kBytes
HF : 9. kBytes
wavefun : 2865. kBytes
response : 4736367. kBytes
This means that VASP needs about 3 TB of RAM, which exceeds the memory available.
However, VASP dies after NQ= 1 0.0000 0.0000 0.0000.
I am attaching the INCAR, KPOINTS, and OUTCAR files.
Thanks-Nick
- Global Moderator
- Posts: 460
- Joined: Mon Nov 04, 2019 12:44 pm
Re: ALGO = CHI fails for graphene
Please also send the POSCAR and POTCAR files used in this example.
- Newbie
- Posts: 21
- Joined: Tue Sep 15, 2020 3:36 pm
Re: ALGO = CHI fails for graphene
Please see the attachment, which now contains the INCAR, POSCAR, POTCAR, KPOINTS, and OUTCAR files.
Thank you,
Nick
- Newbie
- Posts: 21
- Joined: Tue Sep 15, 2020 3:36 pm
Re: ALGO = CHI fails for graphene
Correction to my original post:
"This means that VASP needs about 3 TB of RAM, which does not exceed the memory available".
Sorry for the confusion.
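For the record, a quick sanity check of the numbers (assuming the rank-0 figure is representative of every rank):
# reported per-rank usage: 4782397 kB ~ 4.8 GB
# 640 MPI ranks in total:  640 x 4.8 GB ~ 3.0 TB needed
# available:               20 nodes x 256 GB ~ 5.1 TB
# 3.0 TB < 5.1 TB, so the job should fit in memory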
PS: I repeated the calculation with the KPOINTS file as
login1.ls6(1003)$ more KPOINTS
Auto
0
G
1 1 1
0.0 0.0 0.0
and I still have the same problem.
Thanks-Nick
"This means that VASP needs about 3 TB of RAM, which does not exceed the memory available".
Sorry for the confusion.
PS. I did repeat the calculations with KPOINT file as
login1.ls6(1003)$ more KPOINTS
Auto
0
G
1 1 1
0.0 0.0 0.0
and still I have the same problem.
Thanks-Nick
- Global Moderator
- Posts: 460
- Joined: Mon Nov 04, 2019 12:44 pm
Re: ALGO = CHI fails for graphene
The "total amount of memory used by VASP MPI-rank0" doesn't output the total amount of memory allocated, so don't use this estimation.
The old GW/RPA (and also CHI) only output the used memory after allocation, so unfortunately there is no way in these routines in advance how much memory you will need.
Concerning your calculation you can reduce the used memory (possibly significantly) by using shared memory.
For this you have to compile with at least the precompiler flag -Duse_shmem. Please look at the shared memory page for that:
https://www.vasp.at/wiki/index.php/Shared_memory
After that you have to run the code with NCSHMEM=X, where X defines the number of cores on a node sharing the memory. Please also have a look at its documentation:
https://www.vasp.at/wiki/index.php/NCSHMEM
In the example you sent, you used 20 nodes with 32 cores each. Using NCSHMEM=32 can reduce the memory needed by up to a factor of 32 if the CPUs are on 1 socket; on 2 sockets it should be NCSHMEM=16.
For larger cases the use of NCSHMEM is almost mandatory with the old GW/RPA algorithms. Your case is also quite big because your cell contains a vacuum region.
I hope the calculation will fit into memory this way.
Another thing: you use IVDW=11 in your calculation. It is completely unnecessary here, since you are calculating graphene and not graphite, so there are no interlayer interactions to correct for.
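Here is the minimal sketch mentioned above. It follows the shared-memory wiki page linked earlier (in particular, -Duse_shmem also requires the C helper getshmem.o among the library objects); the socket count is an assumption for a typical 2-socket, 32-core ls6 node.
# makefile.include (build time): enable the shared-memory allocator
CPP_OPTIONS += -Duse_shmem
OBJECTS_LIB = linpack_double.o getshmem.o
# INCAR (run time): 16 cores per socket share one copy of the large response arrays
NCSHMEM = 16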
- Newbie
- Posts: 21
- Joined: Tue Sep 15, 2020 3:36 pm
Re: ALGO = CHI fails for graphene
Thank you for your reply. I tried to compile VASP 6.3.2 on the Lonestar6 supercomputer; the makefile.include and the output.txt file from the compilation are attached.
The compilation crashes as shown below:
/opt/apps/gcc/9.4.0/bin/ld: mpi.o: in function `mpimy_mp_m_divide_shmem_':
mpi.f90:(.text+0x3288): undefined reference to `getshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: mpi.f90:(.text+0x32a6): undefined reference to `attachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: mpi.f90:(.text+0x3377): undefined reference to `detachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: mpi.f90:(.text+0x3496): undefined reference to `destroyshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: mpi.o: in function `mpimy_mp_m_divide_intra_inter_node_':
mpi.f90:(.text+0x3efe): undefined reference to `getshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: mpi.f90:(.text+0x3f1c): undefined reference to `attachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: mpi.f90:(.text+0x3ff0): undefined reference to `detachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: mpi.f90:(.text+0x43a3): undefined reference to `destroyshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: ml_ff_mpi.o: in function `mpi_data_mp_m_divide_intra_inter_node_':
ml_ff_mpi.f90:(.text+0x10c): undefined reference to `getshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: ml_ff_mpi.f90:(.text+0x12a): undefined reference to `attachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: ml_ff_mpi.f90:(.text+0x33a): undefined reference to `detachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: ml_ff_mpi.f90:(.text+0x34a): undefined reference to `destroyshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: wave.o: in function `wave_mp_newwav_shmem_':
wave.f90:(.text+0xb451): undefined reference to `attachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: wave.f90:(.text+0xb596): undefined reference to `attachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: wave.f90:(.text+0xb6f4): undefined reference to `getshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: wave.f90:(.text+0xb71a): undefined reference to `getshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: wave.o: in function `wave_mp_delwav_shmem_':
wave.f90:(.text+0xc077): undefined reference to `detachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: wave.f90:(.text+0xc133): undefined reference to `destroyshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: auger.o: in function `auger_mp_allocate_wavefun_shmem_':
auger.f90:(.text+0x68c): undefined reference to `getshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: auger.f90:(.text+0x735): undefined reference to `attachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: auger.f90:(.text+0x814): undefined reference to `getshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: auger.f90:(.text+0x8e6): undefined reference to `attachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: auger.o: in function `auger_mp_deallocate_wavefun_shmem_':
auger.f90:(.text+0xaa5): undefined reference to `detachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: auger.f90:(.text+0xac5): undefined reference to `destroyshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: auger.f90:(.text+0xae8): undefined reference to `detachshmem_C'
/opt/apps/gcc/9.4.0/bin/ld: auger.f90:(.text+0xb08): undefined reference to `destroyshmem_C'
make[2]: *** [makefile:132: vasp] Error 1
cp: cannot stat 'vasp': No such file or directory
make[1]: *** [makefile:129: all] Error 1
make: *** [makefile:17: std] Error 2
If the option -Duse_shmem vasp compiles with no problems. This compilation has been hacked by TACC staff.
Thanks-Nick
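A note on the errors above: the undefined symbols (getshmem_C, attachshmem_C, detachshmem_C, destroyshmem_C) are the C-side shared-memory helpers that -Duse_shmem pulls in. They are provided by getshmem.c, which must be compiled and linked together with the library objects. A quick check, assuming the standard makefile.include layout from the shared-memory wiki page:
grep OBJECTS_LIB makefile.include
# expected when -Duse_shmem is set:
# OBJECTS_LIB = linpack_double.o getshmem.o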
- Newbie
- Posts: 21
- Joined: Tue Sep 15, 2020 3:36 pm
Re: ALGO = CHI fails for graphene
Correction to the last statement. It should read: "If the option -Duse_shmem is absent, then VASP compiles with no problems. This compilation has been checked by TACC staff."
Thank you-Nick
- Hero Member
- Posts: 586
- Joined: Tue Nov 16, 2004 2:21 pm
- License Nr.: 5-67
- Location: Germany
Re: ALGO = CHI fails for graphene
Hi there,
I'm having exactly the same issue on AMD hardware with the Intel 2022 compiler (even the error messages are the same).
Any news so far?
Thanks
alex
PS: the error log
ar: creating libdmy.a
ar: creating libparser.a
ld: mpi.o: in function `mpimy_mp_m_divide_shmem_':
mpi.f90:(.text+0x3617): undefined reference to `getshmem_C'
ld: mpi.f90:(.text+0x3636): undefined reference to `attachshmem_C'
ld: mpi.f90:(.text+0x370a): undefined reference to `detachshmem_C'
ld: mpi.f90:(.text+0x3826): undefined reference to `destroyshmem_C'
ld: mpi.o: in function `mpimy_mp_m_divide_intra_inter_node_':
mpi.f90:(.text+0x433e): undefined reference to `getshmem_C'
ld: mpi.f90:(.text+0x435d): undefined reference to `attachshmem_C'
ld: mpi.f90:(.text+0x4433): undefined reference to `detachshmem_C'
ld: mpi.f90:(.text+0x480a): undefined reference to `destroyshmem_C'
ld: ml_ff_mpi.o: in function `mpi_data_mp_m_divide_intra_inter_node_':
ml_ff_mpi.f90:(.text+0x10c): undefined reference to `getshmem_C'
ld: ml_ff_mpi.f90:(.text+0x12b): undefined reference to `attachshmem_C'
ld: ml_ff_mpi.f90:(.text+0x33d): undefined reference to `detachshmem_C'
ld: ml_ff_mpi.f90:(.text+0x34d): undefined reference to `destroyshmem_C'
ld: wave.o: in function `wave_mp_newwav_shmem_':
wave.f90:(.text+0xbac1): undefined reference to `attachshmem_C'
ld: wave.f90:(.text+0xbc06): undefined reference to `attachshmem_C'
ld: wave.f90:(.text+0xbd64): undefined reference to `getshmem_C'
ld: wave.f90:(.text+0xbd8a): undefined reference to `getshmem_C'
ld: wave.o: in function `wave_mp_delwav_shmem_':
wave.f90:(.text+0xc817): undefined reference to `detachshmem_C'
ld: wave.f90:(.text+0xc8f9): undefined reference to `destroyshmem_C'
ld: auger.o: in function `auger_mp_allocate_wavefun_shmem_':
auger.f90:(.text+0x9ac): undefined reference to `getshmem_C'
ld: auger.f90:(.text+0xa55): undefined reference to `attachshmem_C'
ld: auger.f90:(.text+0xb34): undefined reference to `getshmem_C'
ld: auger.f90:(.text+0xc06): undefined reference to `attachshmem_C'
ld: auger.o: in function `auger_mp_deallocate_wavefun_shmem_':
auger.f90:(.text+0xdc4): undefined reference to `detachshmem_C'
ld: auger.f90:(.text+0xde3): undefined reference to `destroyshmem_C'
ld: auger.f90:(.text+0xe05): undefined reference to `detachshmem_C'
ld: auger.f90:(.text+0xe24): undefined reference to `destroyshmem_C'
make[2]: *** [makefile:132: vasp] Error 1
cp: cannot stat 'vasp': No such file or directory
make[1]: *** [makefile:129: all] Error 1
make: *** [makefile:17: std] Error 2
PPS:
[vasp.6.3.2_use_shmem]$ cat makefile.include
# Default precompiler options
CPP_OPTIONS = -DHOST=\"LinuxIFC\" \
-DMPI -DMPI_BLOCK=8000 -Duse_collective \
-DscaLAPACK \
-DCACHE_SIZE=4000 \
-Davoidalloc \
-Dvasp6 \
-Duse_bse_te \
-Dtbdyn \
-Duse_shmem \
-Dfock_dblbuf
CPP = fpp -f_com=no -free -w0 $*$(FUFFIX) $*$(SUFFIX) $(CPP_OPTIONS)
FC = mpiifort
FCL = mpiifort
FREE = -free -names lowercase
FFLAGS = -assume byterecl -w
OFLAG = -O2
OFLAG_IN = $(OFLAG)
DEBUG = -O0
OBJECTS = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
OBJECTS_O1 += fftw3d.o fftmpi.o fftmpiw.o
OBJECTS_O2 += fft3dlib.o
# For what used to be vasp.5.lib
CPP_LIB = $(CPP)
FC_LIB = $(FC)
CC_LIB = icc
CFLAGS_LIB = -O
FFLAGS_LIB = -O1
FREE_LIB = $(FREE)
OBJECTS_LIB = linpack_double.o
# For the parser library
CXX_PARS = icpc
LLIBS = -lstdc++
##
## Customize as of this point! Of course you may change the preceding
## part of this file as well if you like, but it should rarely be
## necessary ...
##
# When compiling on the target machine itself, change this to the
# relevant target when cross-compiling for another architecture
VASP_TARGET_CPU ?= -xHOST
FFLAGS += $(VASP_TARGET_CPU)
FFLAGS += -march=core-avx2
# Intel MKL (FFTW, BLAS, LAPACK, and scaLAPACK)
# (Note: for Intel Parallel Studio's MKL use -mkl instead of -qmkl)
FCL += -qmkl=sequential
MKLROOT ?= /path/to/your/mkl/installation
LLIBS += -L$(MKLROOT)/lib/intel64 -lmkl_scalapack_lp64 -lmkl_blacs_intelmpi_lp64
INCS =-I$(MKLROOT)/include/fftw
# HDF5-support (optional but strongly recommended)
# module load HDF5/1.13.1-iimpi-2022a
# CPP_OPTIONS+= -DVASP_HDF5
# HDF5_ROOT ?= /dodrio/apps/RHEL8/zen2-ib/software/HDF5/1.13.1-gompi-2022a
# LLIBS += -L$(HDF5_ROOT)/lib -lhdf5_fortran
# INCS += -I$(HDF5_ROOT)/include
# For the VASP-2-Wannier90 interface (optional)
#CPP_OPTIONS += -DVASP2WANNIER90
#WANNIER90_ROOT ?= /path/to/your/wannier90/installation
#LLIBS += -L$(WANNIER90_ROOT)/lib -lwannier
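Note that in the makefile.include above OBJECTS_LIB lists only linpack_double.o even though -Duse_shmem is set in CPP_OPTIONS. The shared-memory wiki page linked earlier in this thread also asks for getshmem.o there, which provides exactly the getshmem_C/attachshmem_C/detachshmem_C/destroyshmem_C symbols the linker reports as undefined. A likely fix (an assumption, not verified on this machine):
# For what used to be vasp.5.lib
OBJECTS_LIB = linpack_double.o getshmem.o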