Thank you.
I will look into the script.
For fixed-object-size applications (RBD, CephFS), do you think it is a good idea
to pre-split the folders to the point where each folder contains about 1-2k
objects when the cluster is full? I think doing this can avoid the performance
impact of splitting folders while clients are writing data into the cluster.
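(My rough arithmetic, assuming the commonly cited split rule of
filestore_split_multiple * abs(filestore_merge_threshold) * 16 objects per
subfolder: with filestore_split_multiple = 4 as in my config below and the
default merge threshold of 10, a folder would split at about 4 * 10 * 16 = 640
objects, so keeping 1-2k objects per folder would also mean raising those
settings.)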
2018-04-08
shadow_lin
From: David Turner <drakonstein@xxxxxxxxx>
Sent: 2018-04-07 03:33
Subject: Re: [ceph-users] Does jewel 10.2.10 support filestore_split_rand_factor?
To: "shadow_lin"<shadow_lin@xxxxxxx>
Cc: "Pavan Rallabhandi"<PRallabhandi@xxxxxxxxxxxxxxx>, "ceph-users"<ceph-users@xxxxxxxxxxxxxx>
You could randomize your ceph.conf settings
for filestore_merge_threshold and filestore_split_multiple.
It's not pretty, but it would spread things out. You could even do this
as granularly as you'd like down to the individual OSDs while only having a
single ceph.conf file to maintain.
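Something along these lines in ceph.conf would do it (a rough sketch only; the
values below are placeholders to illustrate the idea, not tested
recommendations):

    # hypothetical per-OSD overrides: vary the split settings slightly per OSD
    [osd.0]
    filestore_merge_threshold = 40
    filestore_split_multiple = 8

    [osd.1]
    filestore_merge_threshold = 40
    filestore_split_multiple = 12

Giving each OSD a slightly different split multiple means their subfolders hit
the split threshold at different object counts, so they don't all start
splitting at the same time.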
I would probably go the route of manually splitting your subfolders,
though. I've been using this [1] script for some time to do just
that. I tried to make it fairly environment agnostic so people would
have an easier time implementing it for their needs.
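For reference, the general offline approach looks roughly like the following
(sketch only; I'm assuming ceph-objectstore-tool's apply-layout-settings op
and the pool name "rbd" here rather than describing the script itself, so
verify the op name and arguments against your release before using it):

    ceph osd set noout                                   # keep data from rebalancing while the OSD is down
    systemctl stop ceph-osd@0                            # stop the OSD whose subfolders will be split
    ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
        --op apply-layout-settings --pool rbd            # pre-split PG subfolders to the configured layout
    systemctl start ceph-osd@0
    ceph osd unset noout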
Thanks.
Is there any workaround for 10.2.10 to avoid all OSDs starting to split at
the same time?
Sent: 2018-04-01 22:39
Subject: Re: [ceph-users] Does jewel 10.2.10 support filestore_split_rand_factor?
Cc:
No, it is supported in the next version of Jewel http://tracker.ceph.com/issues/22658
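(Once on a release that includes it, it should show up with e.g.
'ceph daemon osd.0 config get filestore_split_rand_factor' or in the
'config show' output quoted below.)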
The jewel documentation page lists the filestore_split_rand_factor config,
but I can't find it using 'ceph daemon osd.x config show'.

ceph version 10.2.10 (5dc1e4c05cb68dbf62ae6fce3f0700e4654fdbbe)
ceph daemon osd.0 config show | grep split
    "mon_osd_max_split_count": "32",
    "journaler_allow_split_entries": "true",
    "mds_bal_split_size": "10000",
    "mds_bal_split_rd": "25000",
    "mds_bal_split_wr": "10000",
    "mds_bal_split_bits": "3",
    "filestore_split_multiple": "4",
    "filestore_debug_verify_split": "false",
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com