On 8/7/19 1:40 PM, Robert LeBlanc wrote:
> Maybe it's the lateness of the day, but I'm not sure how to do that.
> Do you have an example where all the OSDs are of class ssd?
I can't parse what you mean. You should always paste your `ceph osd tree`
first.
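For reference, a quick sketch of the commands that show the relevant layout (the output depends on your cluster, of course):

```
# show the CRUSH hierarchy; the CLASS column shows each OSD's device class
ceph osd tree

# list the device classes present in the cluster (e.g. hdd, ssd, nvme)
ceph osd crush class ls

# show the per-class "shadow" trees (default~ssd, default~nvme, ...) that
# class-restricted rules select from
ceph osd crush tree --show-shadow
```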
> Yes, we can set quotas to limit space usage (or the number of objects), but
> you cannot reserve space that other pools can't use. The problem is that if
> we set a quota for the CephFS data pool to the equivalent of 95%, there are
> at least two scenarios that make that quota useless.
Of course. In ~95% of CephFS deployments the metadata pool sits on flash
drives with enough space for this.
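For reference, the per-pool quota Robert mentions is set with `ceph osd pool set-quota`; a minimal sketch (the pool name matches the dump below, the byte value is only a placeholder):

```
# cap how much the CephFS data pool may use (placeholder value, not a recommendation)
ceph osd pool set-quota fs_data max_bytes 95000000000000

# setting the quota back to 0 removes it
ceph osd pool set-quota fs_data max_bytes 0
```

As Robert notes, this only caps the pool; it does not reserve space that other pools cannot consume. The pool and rule dump below show the kind of layout I mean: `fs_meta` uses the NVMe-backed rule (crush_rule 0), while `fs_data` uses a different one (crush_rule 4).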
```
pool 21 'fs_data' replicated size 3 min_size 2 crush_rule 4 object_hash
rjenkins pg_num 64 pgp_num 64 last_change 56870 flags hashpspool
stripe_width 0 application cephfs
pool 22 'fs_meta' replicated size 3 min_size 2 crush_rule 0 object_hash
rjenkins pg_num 16 pgp_num 16 last_change 56870 flags hashpspool
stripe_width 0 application cephfs
```
```
# ceph osd crush rule dump replicated_racks_nvme
{
    "rule_id": 0,
    "rule_name": "replicated_racks_nvme",
    "ruleset": 0,
    "type": 1,
    "min_size": 1,
    "max_size": 10,
    "steps": [
        {
            "op": "take",
            "item": -44,
            "item_name": "default~nvme"   <------------
        },
        {
            "op": "chooseleaf_firstn",
            "num": 0,
            "type": "rack"
        },
        {
            "op": "emit"
        }
    ]
}
```
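A rule like this (taking the `default~nvme` shadow root and spreading replicas across racks) is typically created with `ceph osd crush rule create-replicated`; a sketch using the names from the dump above, and the same pattern works for an `ssd` class, which is what Robert was asking about:

```
# replicated rule rooted at "default", restricted to the "nvme" device class,
# with "rack" as the failure domain; this is what produces the default~nvme
# shadow root seen in the dump above
ceph osd crush rule create-replicated replicated_racks_nvme default rack nvme

# point the metadata pool at that rule
ceph osd pool set fs_meta crush_rule replicated_racks_nvme
```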
k