> >> Thanks for your description, I understand that situation and
> >> the nature of the problem.
> >>
> >> That is, the assumption that 20% of free memory is enough for
> >> makedumpfile can be broken if free memory is too small.
> >> If your machine has 200GB memory, OOM will happen even after
> >> fixing the over-allocation bug.
> >
> > Why? In cyclic mode, shouldn't makedumpfile's memory usage be fixed
> > and not depend on the amount of RAM present in the system?
>
> Strictly speaking, it's not fixed but just restricted by the safe
> limit (80% of free memory) like below:
>
>  - bitmap size: used for the 1st and 2nd bitmaps
>  - remains: can be used for the other work of makedumpfile
>    (e.g. I/O buffers)
>
>  pattern                                      | bitmap size | remains
> ----------------------------------------------+-------------+---------
>  A. 100G memory with the over-allocation bug  |     12.8 MB | 17.2 MB
>  B. 100G memory with fixed makedumpfile       |      6.4 MB | 23.6 MB
>  C. 200G memory with fixed makedumpfile       |     12.8 MB | 17.2 MB
>  D. 300G memory with fixed makedumpfile       |     19.2 MB | 10.8 MB
>  E. 400G memory with fixed makedumpfile       |     24.0 MB |  6.0 MB
>  F. 500G memory with fixed makedumpfile       |     24.0 MB |  6.0 MB
>  ...
>
> Baoquan got OOM in pattern A and didn't get it in B, so C must also
> fail due to OOM. This is just what I wanted to say.

Thanks Atsushi for this detailed table.

In fact, in cyclic mode the cyclic bufsize is the dynamic number that
should shrink, not the remaining memory, which under the current scheme
keeps getting squeezed as RAM grows. Users who choose cyclic mode have
already accepted that dumping may take longer; giving them an OOM
instead is not acceptable.

So I think we need to think about how the cyclic_bufsize can be
calculated better. Maybe dynamically adjust the 80% number?

Thanks
Baoquan
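
To make the table's arithmetic concrete, here is a minimal C sketch
(not makedumpfile's actual code). It assumes 30 MB of free memory,
which is inferred from the table (bitmap size + remains sums to 30 MB
in every row), the 80% safe limit stated above, and the rule of thumb
that the two bitmaps cost roughly 64 MB per TB of RAM (1 bit per
4 KiB page, times two bitmaps), which reproduces the table's figures
for patterns B through F:

/*
 * Sketch of the sizing logic implied by the table above.
 * Assumptions: 30 MB free memory, 80% safe limit, ~64 MB of
 * bitmaps per TB of RAM.  Not makedumpfile's real code.
 */
#include <stdio.h>

#define FREE_MB          30.0            /* assumed free memory */
#define SAFE_LIMIT_MB    (FREE_MB * 0.80) /* 24 MB cap on the bitmaps */
#define BITMAP_MB_PER_TB 64.0            /* 2 bitmaps, 1 bit per 4 KiB page */

int main(void)
{
        for (int mem_gb = 100; mem_gb <= 500; mem_gb += 100) {
                /* bitmap bytes wanted for this much RAM */
                double want = mem_gb * BITMAP_MB_PER_TB / 1000.0;
                /* clamp to the safe limit (80% of free memory) */
                double bitmap = want < SAFE_LIMIT_MB ? want : SAFE_LIMIT_MB;

                printf("%3d GB RAM: bitmap %4.1f MB, remains %4.1f MB\n",
                       mem_gb, bitmap, FREE_MB - bitmap);
        }
        return 0;
}

Running this prints 6.4/23.6 for 100 GB up through the 24.0/6.0 cap at
400 GB and beyond, matching rows B-F; pattern A is row C's allocation
on a 100 GB machine, i.e. the bug doubled the bitmap.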
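
And a hypothetical sketch (not an actual patch) of the direction
suggested above: instead of letting the bitmaps grow to a fixed 80% of
free memory, reserve a floor for makedumpfile's other work and let the
cyclic buffer shrink to whatever is left. A smaller cyclic buffer only
means more cycles, i.e. a slower dump, never an OOM. MIN_REMAINS_MB
and fit_cyclic_buffer() are made-up names for illustration:

/*
 * Hypothetical alternative to the fixed 80% rule: guarantee a
 * floor of MIN_REMAINS_MB for I/O buffers etc., and shrink the
 * cyclic buffer to fit.  All names and values are illustrative.
 */
#include <stdio.h>

#define FREE_MB        30.0  /* assumed free memory, as in the table */
#define MIN_REMAINS_MB 15.0  /* hypothetical floor for non-bitmap work */

/* shrink the requested cyclic buffer so MIN_REMAINS_MB always survives */
static double fit_cyclic_buffer(double wanted_mb)
{
        double avail = FREE_MB - MIN_REMAINS_MB;

        return wanted_mb < avail ? wanted_mb : avail;
}

int main(void)
{
        for (int mem_gb = 100; mem_gb <= 500; mem_gb += 100) {
                double wanted = mem_gb * 64.0 / 1000.0; /* ~64 MB per TB */
                double bitmap = fit_cyclic_buffer(wanted);

                printf("%3d GB RAM: bitmap %4.1f MB, remains %4.1f MB, "
                       "~%.1fx cycles\n", mem_gb, bitmap,
                       FREE_MB - bitmap, wanted / bitmap);
        }
        return 0;
}

With these illustrative numbers, a 500 GB machine keeps 15 MB for I/O
and pays roughly 2x the cycles instead of exhausting memory, which is
exactly the trade-off cyclic-mode users signed up for.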