I'm trying to run single-shot GW calculations and they keep crashing, with the following message near the end of the OUTCAR:
Code: Select all
This job will probably crash, due to insufficient avaiable memory.
Available memory per mpi rank: 15046 MB, required memory: 3600 MB.
Reducing NTAUPAR or using more computing nodes might solve this problem.
I set MAXMEM=3600 (before, it was 6800 by default and crashing), but the job crashed again. What I don't understand is this warning message: why does the job crash when the required memory is much lower than the available memory per MPI rank? How should I set MAXMEM so that the job doesn't crash?
I attach the OUTCAR, POSCAR, INCAR, job script, slurm output file.
How to set MAXMEM for GW calculations?
Newbie · Posts: 9 · Joined: Wed Aug 04, 2021 8:23 am
Global Moderator · Posts: 419 · Joined: Mon Sep 13, 2021 11:02 am
Re: How to set MAXMEM for GW calculations?
Hi,
Before I look at your case in detail, I was wondering whether the problem reported at forum/viewtopic.php?f=4&t=18449&p=21697 ... mem#p21697 may be related to yours. Did you have a look?
Administrator · Posts: 282 · Joined: Mon Sep 24, 2018 9:39 am
Re: How to set MAXMEM for GW calculations?
You are studying a reasonably large system with the quartic-scaling GW algorithm.
I strongly suggest switching to the low-scaling GW implementation. Also, please update your VASP version from 6.1.0 to 6.3.2; there is a bug in 6.2.1 concerning the memory prediction for GW jobs, as explained here.
In general, we propose the following steps when encountering memory issues with GW jobs:
- reduce PREC from Accurate to Normal (or even Single)
- for large cells (like yours), set LREAL = Auto in the INCAR; note that VASP prints a corresponding warning in stdout
- reduce the number of k-points (you have already done that)
- use NTAUPAR=1 for low-scaling GW jobs (unnecessary with vasp-6.3.1 and newer, since NTAUPAR is set internally based on available memory).
Code: Select all
ISMEAR = 0        ! Gaussian smearing
SIGMA  = 0.05     ! smearing width in eV
LREAL  = A        ! real-space projection (same as Auto)
EDIFF  = 1E-8     ! electronic convergence criterion
ISPIN  = 2        ! spin-polarized calculation
LASPH  = .TRUE.   ! aspherical contributions within the PAW spheres
NEDOS  = 5000     ! number of DOS grid points
ALGO   = G0W0R    ! low-scaling single-shot GW
NBANDS = 1152
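For reference, a minimal sketch of how the memory-related tags from the steps above could be combined for a low-scaling run is shown below. The MAXMEM value is a placeholder assumption for a node with 64 GB of RAM: as I understand it, MAXMEM should reflect the memory that is physically available (in MB), not the amount the job is predicted to need, so lowering it (as with MAXMEM=3600) tells VASP it has less memory to work with, which is the opposite of the intended effect. Please check the exact semantics of MAXMEM on the VASP wiki for your version.

Code: Select all
ALGO    = G0W0R    ! low-scaling single-shot GW
PREC    = Normal   ! reduced from Accurate to save memory
LREAL   = Auto     ! real-space projection for large cells
NTAUPAR = 1        ! only needed for vasp < 6.3.1
MAXMEM  = 64000    ! placeholder assumption: available memory in MB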