Re: Does jewel 10.2.10 support filestore_split_rand_factor?

I believe Luminous has something like this, in that you can specify how many objects you anticipate a pool will hold when you create it.  However, if you're creating pools in Luminous, you're probably using BlueStore anyway.  For Jewel and earlier, pre-splitting PGs doesn't help as much as you'd think: as soon as a PG moves to a new OSD due to a lost drive or added storage, it is rebuilt at the new location with the current filestore subfolder settings, which undoes your pre-splitting.
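For reference, pool creation in Luminous accepts that hint as a trailing expected-num-objects argument; a minimal sketch, with an illustrative pool name, PG counts, and object count:

    # Hint the expected object count at creation time so the initial
    # directory layout is sized for it (all values here are examples)
    ceph osd pool create mypool 128 128 replicated replicated_rule 1000000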

On Sun, Apr 8, 2018 at 1:59 AM shadow_lin <shadow_lin@xxxxxxx> wrote:
Thank you.
I will look into the script.
For applications with a fixed object size (RBD, CephFS), do you think it is a good idea to pre-split the folders so that each folder holds about 1-2k objects once the cluster is full? I think doing this could avoid the performance impact of folder splitting while clients are writing data into the cluster.
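For a rough sense of the numbers: filestore splits a subfolder once it exceeds filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects. Assuming the default merge threshold of 10 together with the split multiple of 4 shown in the config dump further down, that works out to:

    4 * 10 * 16 = 640 objects per subfolder before a split

so a 1-2k-objects-per-folder target would also require raising those settings, since folders split at that threshold regardless of any pre-splitting.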
2018-04-08
shadow_lin

From: David Turner <drakonstein@xxxxxxxxx>
Sent: 2018-04-07 03:33
Subject: Re: Does jewel 10.2.10 support filestore_split_rand_factor?
To: "shadow_lin" <shadow_lin@xxxxxxx>
Cc: "Pavan Rallabhandi" <PRallabhandi@xxxxxxxxxxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
 
You could randomize your ceph.conf settings for filestore_merge_threshold and filestore_split_multiple.  It's not pretty, but it would spread things out.  You could even do this as granularly as you'd like down to the individual OSDs while only having a single ceph.conf file to maintain.
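As a sketch of what that could look like in the one shared file (the section names are standard ceph.conf; the values are purely illustrative), with slightly different split points per OSD so they don't all split at once:

    [osd]
    filestore_merge_threshold = 40
    filestore_split_multiple = 8

    # per-OSD overrides stagger the split threshold
    [osd.0]
    filestore_split_multiple = 7

    [osd.1]
    filestore_split_multiple = 9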

I would probably go the route of manually splitting your subfolders, though.  I've been using this [1] script for some time to do just that.  I tried to make it fairly environment agnostic so people would have an easier time implementing it for their needs.
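The script itself isn't reproduced in this thread, but offline subfolder splitting on filestore is commonly driven by ceph-objectstore-tool's apply-layout-settings op against a stopped OSD; a rough sketch, assuming a default deployment layout and an illustrative OSD id and pool name:

    # Stop the OSD, apply the current split/merge layout settings offline,
    # then bring it back (paths, id, and pool name are examples)
    systemctl stop ceph-osd@0
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
        --journal-path /var/lib/ceph/osd/ceph-0/journal \
        --op apply-layout-settings --pool rbd
    systemctl start ceph-osd@0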


On Sun, Apr 1, 2018 at 10:42 AM shadow_lin <shadow_lin@xxxxxxx> wrote:
Thanks.
Is there any workaround for 10.2.10 to avoid all OSDs starting to split at the same time?
 
2018-04-01
shadowlin
 

From: Pavan Rallabhandi <PRallabhandi@xxxxxxxxxxxxxxx>
Sent: 2018-04-01 22:39
Subject: Re: Does jewel 10.2.10 support filestore_split_rand_factor?
To: "shadow_lin" <shadow_lin@xxxxxxx>, "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Cc:
 

No, it is supported in the next version of Jewel: http://tracker.ceph.com/issues/22658
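Per that tracker issue, the option adds a random factor to each PG's split threshold so that OSDs don't all split simultaneously. Once on a release that includes it, setting it would look something like this (the value is illustrative):

    [osd]
    # randomize the per-PG split point to spread splits out over time
    filestore_split_rand_factor = 20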

 

From: ceph-users <ceph-users-bounces@xxxxxxxxxxxxxx> on behalf of shadow_lin <shadow_lin@xxxxxxx>
Date: Sunday, April 1, 2018 at 3:53 AM
To: ceph-users <ceph-users@xxxxxxxxxxxxxx>
Subject: EXT: Does jewel 10.2.10 support filestore_split_rand_factor?

 

The Jewel documentation lists the filestore_split_rand_factor config option, but I can't find it with 'ceph daemon osd.x config show'.

 

ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)

ceph daemon osd.0 config show|grep split
    "mon_osd_max_split_count": "32",
    "journaler_allow_split_entries": "true",
    "mds_bal_split_size": "10000",
    "mds_bal_split_rd": "25000",
    "mds_bal_split_wr": "10000",
    "mds_bal_split_bits": "3",
    "filestore_split_multiple": "4",
    "filestore_debug_verify_split": "false",

 

2018-04-01


shadow_lin

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
