Re: dense storage nodes

FWIW, we ran tests back in the dumpling era that more or less showed the same thing. Increasing the merge/split thresholds does help. We suspect that's primarily because the PG directory splitting gets spread out over a longer period of time, so the effect lessens. We're looking at some options to introduce jitter after the threshold is hit so that PGs don't all split at exactly the same time.
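
For reference, the knobs in question are filestore_merge_threshold and
filestore_split_multiple. With the stock values (merge threshold 10, split
multiple 2), a leaf directory splits 16 ways once it goes past
2 * 10 * 16 = 320 files. Something along these lines in ceph.conf pushes
that point out; the particular numbers below are only an illustration, not
the values from our tests:

  [osd]
  # split point becomes 8 * 40 * 16 = 5120 files per leaf directory
  filestore merge threshold = 40
  filestore split multiple = 8

The trade-off is that when the splits finally do happen, each one has more
files to move, so spreading them out doesn't make the work free.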

Here are the old tests:

https://drive.google.com/open?id=0B2gTBZrkrnpZNTNicWwtT1NobUk

Mark

On 05/18/2016 09:31 PM, Kris Jurka wrote:


On 5/18/2016 7:15 PM, Christian Balzer wrote:

We have hit the following issues:

  - Filestore merge splits occur at ~40 MObjects with default settings.
This is a really, really bad couple of days while things settle.

Could you elaborate on that?
As in, which settings affect this, and what exactly happens? "Merge splits"
sounds like an oxymoron, so I suppose it's more of a split than a merge to
be so painful?


Filestore merges directories when the leaves are largely empty and splits
them when they're full, so they're sort of the same thing.  Here's the
result of a test I ran storing objects into RGW as fast as possible; you
can see performance tank while directories split and then recover
afterwards.

http://thread.gmane.org/gmane.comp.file-systems.ceph.user/27189/focus=27213
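
As a rough back-of-envelope sketch of where those split waves land: with
the default thresholds a leaf directory splits 16 ways at about 320 files,
and every wave multiplies the number of leaves per PG by 16. The PG count
below is purely hypothetical, not a figure from this thread:

  # Rough sketch of filestore directory split waves, assuming the
  # default thresholds. num_pgs is hypothetical, not from this thread.
  filestore_merge_threshold = 10   # default
  filestore_split_multiple = 2     # default
  num_pgs = 8192                   # hypothetical pool-wide PG count

  # a leaf directory splits 16 ways once it exceeds this many files
  files_per_leaf = filestore_split_multiple * abs(filestore_merge_threshold) * 16  # 320

  for wave in range(3):
      leaves_per_pg = 16 ** wave
      objects = num_pgs * leaves_per_pg * files_per_leaf
      print("wave %d: ~%.1f M objects cluster-wide" % (wave + 1, objects / 1e6))

With those assumptions the second wave lands in the ~40 M object range,
which is at least the same ballpark as the figure quoted above, though the
exact point obviously depends on PG count and hash skew.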

Kris Jurka
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


