Hi Ben,
I previously hit a libcurl deadlock bug myself: updating from libcurl 7.29.0-25 to the updated package libcurl 7.29.0-32 on RHEL 7 fixed that problem.
I had not seen the issue you linked. It doesn't seem directly related, since my problem is a memory leak rather than CPU usage. Clearly, though, older libcurl versions remain problematic for multiple reasons, so I'll give a newer one a try.
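In case it's useful to anyone else hitting this, a quick way to confirm which libcurl radosgw would actually load at runtime is to ask the library itself. Here's a minimal Python sketch using ctypes (it assumes libcurl.so.4, the standard soname on RHEL 7):

    # Load the system libcurl and print its version string.
    import ctypes

    libcurl = ctypes.CDLL("libcurl.so.4")  # standard soname on RHEL 7
    libcurl.curl_version.restype = ctypes.c_char_p  # curl_version() returns char *
    print(libcurl.curl_version().decode())  # e.g. "libcurl/7.29.0 NSS/... zlib/..."

(rpm -q libcurl will also show the installed package release.)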
Thanks for the input!
-- Trey
On Fri, Oct 21, 2016 at 3:21 AM, Ben Morrice <ben.morrice@xxxxxxx> wrote:
What version of libcurl are you using?
I was hitting this bug with RHEL 7 / libcurl 7.29, which could also be the culprit in your case:
http://tracker.ceph.com/issues/15915

Kind regards,

Ben Morrice

______________________________________________________________________
Ben Morrice | e: ben.morrice@xxxxxxx | t: +41-21-693-9670
EPFL ENT CBS BBP
Biotech Campus
Chemin des Mines 9
1202 Geneva
Switzerland

On 20/10/16 21:41, Trey Palmer wrote:
I've been trying to test radosgw multisite and have hit a pretty bad memory leak. It appears to be associated only with multisite sync.

Multisite works well for small numbers of objects. However, it all fell over when I wrote roughly 8 million 64 KB objects into two buckets overnight for testing (via cosbench; a rough equivalent of the workload is sketched below).

The leak appears to happen on the multisite transfer source -- that is, the node where the objects were originally written. The radosgw process eventually dies, I'm sure via the OOM killer, and systemd restarts it. Then the cycle repeats, though multisite sync pretty much stops at that point.

I have tried 10.2.2, 10.2.3, and a combination of the two. I'm running on CentOS 7.2, using civetweb with SSL.

I saw that the memory profiler only works on mon, osd and mds processes.

Anyone else seen anything like this?

-- Trey
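P.S. For anyone trying to reproduce, the write load was roughly equivalent to the Python sketch below. The endpoint, credentials, and bucket names are placeholders, and the actual runs used cosbench rather than this script:

    # Rough reproduction of the test load: ~8 million 64 KB objects
    # written into two buckets on the master zone.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://rgw-master.example.com",  # placeholder endpoint
        aws_access_key_id="ACCESS_KEY",                 # placeholder
        aws_secret_access_key="SECRET_KEY",             # placeholder
    )

    payload = b"\0" * (64 * 1024)  # one 64 KB body, reused for every PUT

    for bucket in ("synctest-a", "synctest-b"):  # hypothetical bucket names
        for i in range(4000000):                 # ~8M objects across both buckets
            s3.put_object(Bucket=bucket, Key="obj-%08d" % i, Body=payload)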
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com