Re: mem leaks in Bluestore?


On Mon, Jun 27, 2016 at 10:11 AM, Igor Fedotov <ifedotov@xxxxxxxxxxxx> wrote:
>
> Hi All,
>
> let me share some observations I collected while running ceph_test_objectstore against the bluestore.
>
> Initially I started this investigation due to a failure in the SyntheticMatrixCompressionAlgorithm test case. The issue appeared while running the whole test suite and had a pretty odd symptom: the failing test case was run with settings that weren't configured for it: compression = none, compression algorithm = snappy. There was no attempt to run with zlib despite the fact that zlib is first in the list. When running the test case on its own everything is OK.
> Further investigation showed that free RAM on my VM drops almost to zero while running the test suite, which probably prevents the desired config params from being applied. Hence I proceeded with a memory leak investigation.
>
> Since the Synthetic test cases are pretty complex I switched to a simpler one - Many4KWriteNoCSumTest. As it performs writes against a single object only, this removes other ops, compression, csum, multiple-object handling, etc. from suspicion.
> Currently I can see ~6 GB of memory consumption when doing ~3000 random writes (up to 4K each) over a 4M object. Counting Bluestore's Buffer, Blob and Onode objects shows that they don't grow unexpectedly over time for this test case.
> Then I changed the test case to perform fixed-length (64K) writes - memory consumption for 3000 writes dropped to 500 MB, but I can see that the Buffer count keeps growing - one buffer per write. Thus the original issue seems specific to small writes, though there is probably another issue with the buffer cache for big ones.
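> For reference, the object counting is just simple instrumentation along these lines (a rough sketch with made-up names, not the actual Bluestore code):
>
>   #include <atomic>
>
>   // One global counter per tracked type: bumped in the constructors,
>   // dropped in the destructor, dumped periodically from the test.
>   static std::atomic<long> buffer_count{0};
>
>   struct Buffer {
>     Buffer()              { buffer_count.fetch_add(1, std::memory_order_relaxed); }
>     Buffer(const Buffer&) { buffer_count.fetch_add(1, std::memory_order_relaxed); }
>     ~Buffer()             { buffer_count.fetch_sub(1, std::memory_order_relaxed); }
>     // ... payload ...
>   };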
> That's all I have so far.
>
> Any comments/ideas are appreciated.
>

Igor,

Both clang and gcc nowadays ship a fast address sanitizer that
supports leak detection:
https://github.com/google/sanitizers/wiki/AddressSanitizerLeakSanitizer
I would try using that first. It's so much faster than Valgrind,
by something like an order of magnitude.
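
In practice it just means building with -fsanitize=address and running
with leak detection enabled. A minimal standalone check that the leak
detector is wired up (illustrative only, not Ceph's actual build
invocation):

  // leak_demo.cpp - deliberately leaks one allocation so LeakSanitizer
  // reports "Direct leak of 4096 byte(s)" at exit.
  //
  // Build: clang++ -g -O1 -fsanitize=address leak_demo.cpp -o leak_demo
  //        (or g++ -g -O1 -fsanitize=address ...)
  // Run:   ASAN_OPTIONS=detect_leaks=1 ./leak_demo
  //        (on Linux x86_64 leak detection is on by default under ASan,
  //        so the option is usually redundant but harmless)
  #include <cstdlib>

  int main() {
    void *p = std::malloc(4096);  // never freed
    (void)p;
    return 0;
  }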



-- 
Milosz Tanski
CTO
16 East 34th Street, 15th floor
New York, NY 10016

p: 646-253-9055
e: milosz@xxxxxxxxx
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


