large memory leak on scrubbing

Hi,
We noticed an issue on our Ceph/S3 cluster that I think is related to scrubbing: large memory leaks.

Logs 09.xx: https://www.dropbox.com/s/4z1fzg239j43igs/ceph-osd.4.log_09xx.tar.gz
From 09:30 to 09:44 (14 minutes) the osd.4 process grows to 28 GB.

I think this is the curious part:
2013-08-16 09:43:48.801331 7f6570d2e700  0 log [WRN] : slow request 32.794125 seconds old, received at 2013-08-16 09:43:16.007104: osd_sub_op(unknown.0.0:0 16.113d 0//0//-1 [scrub-reserve] v 0'0 snapset=0=[]:[] snapc=0=[]) v7 currently no flag points reached
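
In case it helps to line the memory growth up with the scrub messages in the log, below is a minimal sketch (just an illustration, not something from our production setup) that logs the RSS of one ceph-osd process once a minute. It assumes a Linux /proc filesystem and that the daemon shows up with "ceph-osd" and its id ("4" here, as an example) on its command line:

#!/usr/bin/env python
# Illustration only: log the resident set size (RSS) of one ceph-osd
# process once a minute, so memory growth can be timestamped against
# the scrub messages in the OSD log.
# Assumptions: Linux /proc, and the daemon's command line contains
# "ceph-osd" plus the OSD id as a separate argument (e.g. "ceph-osd -i 4").

import os
import time

OSD_ID = "4"  # example id; change to the OSD you want to watch

def find_osd_pid(osd_id):
    """Return the pid of the ceph-osd process with the given id, or None."""
    for pid in os.listdir("/proc"):
        if not pid.isdigit():
            continue
        try:
            with open("/proc/%s/cmdline" % pid) as f:
                argv = f.read().split("\0")
        except IOError:
            continue  # process went away or is not readable
        if argv and "ceph-osd" in argv[0] and osd_id in argv:
            return pid
    return None

def rss_gib(pid):
    """Read VmRSS from /proc/<pid>/status and return it in GiB."""
    with open("/proc/%s/status" % pid) as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1]) / (1024.0 * 1024.0)  # KiB -> GiB
    return 0.0

pid = find_osd_pid(OSD_ID)
if pid is None:
    raise SystemExit("ceph-osd -i %s not found" % OSD_ID)
while True:
    print("%s osd.%s rss=%.2f GiB" %
          (time.strftime("%Y-%m-%d %H:%M:%S"), OSD_ID, rss_gib(pid)))
    time.sleep(60)

ps or top report the same number, of course; a script like this just makes it easy to timestamp the samples against the OSD log.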

We have a large rgw index and a lot of large files on this cluster as well.
ceph version 0.56.6 (95a0bda7f007a33b0dc7adf4b330778fa1e5d70c)
Setup: 
- 12 servers x 12 OSDs
- 3 mons
Default scrubbing settings.
Journal and filestore settings:
        journal aio = true
        filestore flush min = 0
        filestore flusher = false
        filestore fiemap = false
        filestore op threads = 4
        filestore queue max ops = 4096
        filestore queue max bytes = 10485760
        filestore queue committing max bytes = 10485760
        journal max write bytes = 10485760
        journal queue max bytes = 10485760
        ms dispatch throttle bytes = 10485760
        objecter inflight op bytes = 10485760

Is this a known bug in this version?
Do you know of any workaround?

---
Regards
Dominik
