On 04/24/14 at 07:50am, Baoquan He wrote:
> On 04/23/14 at 01:08pm, Vivek Goyal wrote:
> >
> > > - bitmap size: used for the 1st and 2nd bitmaps
> > > - remains: can be used for the other work of makedumpfile (e.g. I/O buffer)
> > >
> > > pattern                                      | bitmap size | remains
> > > ---------------------------------------------+-------------+---------
> > > A. 100G memory with the over-allocation bug  |     12.8 MB | 17.2 MB
> > > B. 100G memory with fixed makedumpfile       |      6.4 MB | 23.6 MB
> > > C. 200G memory with fixed makedumpfile       |     12.8 MB | 17.2 MB
> > > D. 300G memory with fixed makedumpfile       |     19.2 MB | 10.8 MB
> > > E. 400G memory with fixed makedumpfile       |     24.0 MB |  6.0 MB
> > > F. 500G memory with fixed makedumpfile       |     24.0 MB |  6.0 MB
> > > ...
> > >
> > > Baoquan got OOM in pattern A and didn't get it in B, so C must also
> > > fail due to OOM. This is just what I wanted to say.
> >
> > OK, so here the bitmap size is growing because we have not hit the
> > limit of 80% of available memory yet, but it gets capped at 24 MB once
> > we do hit that limit. I think that's fine; that's what I was looking
> > for.
> >
> > Now the key question that remains is whether letting the bitmaps use
> > 80% of free memory is too much. Are other things happening in the
> > system that consume memory, so that OOM hits because memory is not
> > available? If that's the case, we probably need to lower the amount
> > of memory allocated to bitmaps, say to 70%, 60%, or maybe 50%. But
> > this should be data driven.
>
> How about adding another limit, say a safety limit on the memory left
> over, e.g. 20 MB? If the remaining memory, i.e. the 20% of free memory
> not taken for bitmaps, is bigger than 20 MB, the 80% can be used to
> calculate the bitmap size. If it is smaller than 20 MB, we just take
> (total memory - safety limit) for the bitmap size.

Oh, this is what Atsushi suggested earlier in his comments.
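
For reference, the table's figures can be reproduced from first
principles. This is only a back-of-the-envelope sketch: the 4 KiB page
size is the usual x86 value, and the 30 MB free-memory figure and the
reporting unit (1 MB printed as 1,024,000 bytes, i.e. 1000 KiB) are
inferred from the rows themselves, where bitmap size + remains always
sums to 30 MB. Pattern A is just the 100G figure doubled by the bug.

#include <stdio.h>

#define PAGE_SIZE  4096ULL                 /* assumed 4 KiB pages */
#define FREE_MEM   (30ULL * 1024 * 1000)   /* 30 MB; every row sums to this */

int main(void)
{
        unsigned long long gb, pages, bitmap;
        unsigned long long limit = FREE_MEM * 80 / 100;  /* 80% cap = 24.0 MB */

        for (gb = 100; gb <= 500; gb += 100) {
                pages = (gb << 30) / PAGE_SIZE;  /* one bit per page */
                bitmap = pages / 8 * 2;          /* two bitmaps: 1st and 2nd */
                if (bitmap > limit)
                        bitmap = limit;          /* cap first hit at 400G */
                printf("%3lluG: bitmap %4.1f MB, remains %4.1f MB\n",
                       gb, bitmap / 1024e3, (FREE_MEM - bitmap) / 1024e3);
        }
        return 0;
}

This prints exactly the B-F rows above: 6.4/23.6, 12.8/17.2, 19.2/10.8,
then 24.0/6.0 once the 80% cap kicks in.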
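To make the proposed rule concrete, here is a minimal C sketch
(hypothetical function and names, not actual makedumpfile code; I read
"total memory" in the last step as the free memory available to
makedumpfile, and the zero return for the degenerate case is my
addition):

#define SAFETY_LIMIT  (20ULL << 20)  /* 20 MB kept for other work */

unsigned long long calc_bitmap_budget(unsigned long long free_mem)
{
        unsigned long long budget = free_mem * 80 / 100;

        if (free_mem - budget >= SAFETY_LIMIT)
                return budget;                   /* 20% left is enough */
        if (free_mem > SAFETY_LIMIT)
                return free_mem - SAFETY_LIMIT;  /* free - safety limit */
        return 0;                                /* too little memory at all */
}

With the 30 MB of free memory seen above, 20% is only 6 MB, so under
this rule the bitmaps would get 30 - 20 = 10 MB rather than 24 MB.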