Sorry, late to the party here. I agree: bump the merge and split thresholds. We’re as high as 50/12. I chimed in on an RH ticket about this. It’s one of those things you just have to find out as an operator, since it’s not well documented :(
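For anyone who wants to try it, the knobs live under [osd] in ceph.conf. A rough sketch only -- the numbers below are illustrative, not a recommendation, and you'll want to size them against your own object counts per PG:

    [osd]
    # defaults are 10 and 2; raising these makes subdirectories split much later
    filestore merge threshold = 40
    filestore split multiple = 8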
We have over 200 million objects in this cluster, and it’s still doing over 15000 write IOPS all day long with 302 spinning drives + SATA SSD journals. Having enough memory and dropping your vfs_cache_pressure should also help.
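On the vfs_cache_pressure point, that's just a kernel sysctl. A minimal sketch, with an example value and file path (anything below the default of 100 tells the kernel to hold on to dentry/inode caches harder, which matters a lot with this many files on disk):

    # e.g. /etc/sysctl.d/90-ceph-osd.conf
    vm.vfs_cache_pressure = 10

    # apply without a reboot
    sysctl -p /etc/sysctl.d/90-ceph-osd.conf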
Keep in mind that if you change the values, the change won’t take effect immediately. Subdirectories only get merged back once they’re under the calculated threshold and a write occurs (maybe a read too, I forget).
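If I have the arithmetic right (worth double-checking against the docs for your release), the split point works out to roughly:

    filestore_split_multiple * abs(filestore_merge_threshold) * 16  files per subdirectory

so the defaults (2 and 10) split at around 320 files per subdirectory, while something like 8/40 pushes that to around 5120.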
Warren
From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of Wade Holler <wade.holler@xxxxxxxxx>
Date: Monday, June 20, 2016 at 2:48 PM
To: Blair Bethwaite <blair.bethwaite@xxxxxxxxx>, Wido den Hollander <wido@xxxxxxxx>
Cc: Ceph Development <ceph-devel@xxxxxxxxxxxxxxx>, "ceph-users@xxxxxxxxxxxxxx" <ceph-users@xxxxxxxxxxxxxx>
Subject: Re: Dramatic performance drop at certain number of objects in pool

Thanks everyone for your replies. I sincerely appreciate it. We are testing with different pg_num and filestore_split_multiple settings. Early indications are ... well, not great. Regardless, it is nice to understand the symptoms better so we can try to design around it.
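For concreteness, what we're varying looks roughly like this -- pool name and values are placeholders, and my understanding is the filestore setting only affects splits/merges that happen after the change, not existing directories:

    # pg_num / pgp_num on the test pool (illustrative values)
    ceph osd pool set testpool pg_num 2048
    ceph osd pool set testpool pgp_num 2048

    # filestore split multiple, via the [osd] section of ceph.conf or injected at runtime
    ceph tell osd.* injectargs '--filestore-split-multiple 8'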
Best Regards,
Wade
On Mon, Jun 20, 2016 at 2:32 AM Blair Bethwaite <blair.bethwaite@xxxxxxxxx> wrote:
On 20 June 2016 at 09:21, Blair Bethwaite <blair.bethwaite@xxxxxxxxx> wrote: