Try this?
https://github.com/ceph/ceph/pull/12634
This looks like it reduces memory usage and improves performance quite a
bit with smaller shard target/max values. With 25/50 I'm seeing closer
to 2.6GB RSS and around 13K IOPS, though with occasional (likely
rocksdb-related) stalls. I'll run through the tests again.
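For reference, roughly how I'd express that in ceph.conf -- the option
names (bluestore_extent_map_shard_target_size /
bluestore_extent_map_shard_max_size, from the sharded extent map work)
and the interpretation of "25/50" as that target/max pair are my
assumption, so check against the PR before copying:

    [osd]
    # Assumed option names from the sharded extent map PR; the 25/50
    # values are the target/max pair discussed above, in whatever units
    # the PR defines for these options.
    bluestore_extent_map_shard_target_size = 25
    bluestore_extent_map_shard_max_size = 50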
Mark
OK, I ran through the tests with both 4K and 16K min_alloc/max_alloc/blob
sizes using master+12629+12634:
https://drive.google.com/uc?export=download&id=0B2gTBZrkrnpZQzdRU3B1SGZUbDQ
Performance is up in all tests and memory consumption is down
(especially in the smaller target/max tests). It looks like 100/200 is
probably the optimal configuration on my test setup right now. The 4K
min_alloc tests hover around 22.5K IOPS with ~1300% CPU usage, and the
16K min_alloc tests hover around 25K IOPS with ~1000% CPU usage. I think
it will be worth spending some time looking at locking in the bitmap
allocator given the perf traces. Beyond that, I'm seeing rocksdb show up
quite a bit in the top CPU-consuming functions now, especially CRC32.
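In case anyone wants to look at the profiling side themselves, this is
roughly the kind of perf invocation I mean (a generic sketch, not the
exact command line I used; it assumes a single ceph-osd process on the
box, otherwise substitute the right PID):

    # Sample the running OSD for 30s with call graphs, then summarize by symbol.
    perf record -g -p $(pidof ceph-osd) -- sleep 30
    perf report --stdio --sort symbol | head -40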
Mark