Re: Possible memory leak in Ceph 14.0.1-1022.gc881d63

Hi Erwan,


Out of curiosity, did you look at the mempool stats at all? It's pretty likely you'll run out of memory with 512MB given our current defaults, and the memory autotuner won't be able to keep up (it will do its best, but it can't work miracles).
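For reference, the per-pool accounting comes from the admin socket via `ceph daemon osd.0 dump_mempools`. A minimal sketch of summing such a snapshot against the 512MB limit (the JSON below is illustrative, made-up data in the shape of that output, not real numbers):

```python
import json

# Illustrative snapshot shaped like `ceph daemon osd.0 dump_mempools`
# output (per-pool "items"/"bytes" counters); the numbers are made up.
snapshot = json.loads("""
{
  "mempool": {
    "by_pool": {
      "bluestore_cache_data":  {"items": 1024,  "bytes": 134217728},
      "bluestore_cache_onode": {"items": 65536, "bytes": 41943040},
      "osd_pglog":             {"items": 30000, "bytes": 251658240}
    }
  }
}
""")

LIMIT = 512 * 1024 * 1024  # the 512MB container limit discussed above

by_pool = snapshot["mempool"]["by_pool"]
total = sum(p["bytes"] for p in by_pool.values())

# Print pools largest-first, then the total as a fraction of the limit.
for name, pool in sorted(by_pool.items(), key=lambda kv: -kv[1]["bytes"]):
    print(f"{name:30s} {pool['bytes'] / 2**20:8.1f} MiB")
print(f"{'total':30s} {total / 2**20:8.1f} MiB ({100 * total / LIMIT:.0f}% of limit)")
```

Note that the mempools only cover tracked allocations; actual RSS will be higher, which is part of why 512MB is so tight.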


As per the ceph-nano project, the cluster is very simple, with the following configuration:

  cluster:
    id:     b2eafdd3-ec87-4107-afaf-521980bb3d9e
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum ceph-nano-travis-faa32aebf00b (age 2m)
    mgr: ceph-nano-travis-faa32aebf00b(active, since 2m)
    osd: 1 osds: 1 up (since 2m), 1 in (since 2m)
    rgw: 1 daemon active

  data:
    pools:   5 pools, 40 pgs
    objects: 174 objects, 1.6 KiB
    usage:   1.0 GiB used, 9.0 GiB / 10 GiB avail
    pgs:     40 active+clean


I'll save the mempool stats over time to see what is growing on the idle case.
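A simple way to do that comparison (a hypothetical helper, assuming two `dump_mempools`-shaped snapshots saved some time apart; the snapshots below are made up for illustration):

```python
def growing_pools(before, after, threshold=0):
    """Return (pool, byte_delta) pairs for pools whose byte count grew
    between two dump_mempools-style snapshots, largest growth first."""
    deltas = {}
    for name, pool in after["mempool"]["by_pool"].items():
        old = before["mempool"]["by_pool"].get(name, {"bytes": 0})
        delta = pool["bytes"] - old["bytes"]
        if delta > threshold:
            deltas[name] = delta
    return sorted(deltas.items(), key=lambda kv: -kv[1])

# Made-up snapshots standing in for two samples taken minutes apart.
before = {"mempool": {"by_pool": {
    "osd_pglog":             {"items": 1000, "bytes": 1048576},
    "bluestore_cache_onode": {"items": 500,  "bytes": 524288},
}}}
after = {"mempool": {"by_pool": {
    "osd_pglog":             {"items": 4000, "bytes": 4194304},
    "bluestore_cache_onode": {"items": 500,  "bytes": 524288},
}}}

for name, delta in growing_pools(before, after):
    print(f"{name}: +{delta / 2**20:.1f} MiB")
```

Anything that keeps climbing while the cluster is idle is a good candidate for the leak.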


[...]

In any event, when I've tested OSDs with that little memory, there have been fairly dramatic performance impacts in a variety of ways, depending on what you change. In practice, the minimum amount of memory we can reasonably work with right now is probably around 1.5-2GB, and we do a lot better with 3-4GB+.
In the ceph-nano context, we don't really target performance.


