Thank you very much, Paul.
Kevin
On Thu, 20 Sep 2018 at 15:19, Paul Emmerich <paul.emmerich@xxxxxxxx> wrote:
Hi,
device classes are internally represented as completely independent
trees/roots; showing them in one tree is just syntactic sugar.
For example, if you have a hierarchy like root --> host1, host2, host3
--> nvme/ssd/sata OSDs, then you'll actually have 3 trees:
root~ssd -> host1~ssd, host2~ssd, ...
root~sata -> host1~sata, host2~sata, ...
root~nvme -> host1~nvme, host2~nvme, ...
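You can inspect these shadow trees and their per-class weights directly; on
a Luminous or newer cluster something along these lines should work (the
rule name and root below are just example placeholders):
  ceph osd crush tree --show-shadow
  ceph osd crush rule create-replicated nvme-rule default rack nvme
The first command lists the per-class shadow hierarchies with their own
weights; the second creates a replicated rule restricted to the nvme device
class with rack as the failure domain.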
Paul
2018-09-20 14:54 GMT+02:00 Kevin Olbrich <ko@xxxxxxx>:
> Hi!
>
> Currently I have a cluster with four hosts and 4x HDDs + 4x SSDs per host.
> I also have replication rules to distinguish between HDD and SSD (with the
> failure domain set to rack), which are mapped to pools.
>
> What happens if I add a heterogeneous host with 1x SSD and 1x NVMe (where
> the NVMe will get a new device-class based rule)?
>
> Will the CRUSH weight be calculated from the OSDs up to the failure domain
> based on the CRUSH rule?
> The only CRUSH weights I know of and see are those shown by "ceph osd tree".
>
> Kind regards
> Kevin
>
--
Paul Emmerich
Looking for help with your Ceph cluster? Contact us at https://croit.io
croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com