Re: CRUSH rule device classes mystery

What's the output of "ceph -s" and "ceph osd tree"?
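
In particular, check the CLASS column of "ceph osd tree": the custom
classes should show up on exactly the OSDs you expect. Roughly like this
(host name, IDs and weights below are made up for illustration):

ID CLASS    WEIGHT  TYPE NAME      STATUS REWEIGHT PRI-AFF
-1          0.05846 root default
-3          0.01949     host node1
 0 fasthdd  0.00974         osd.0      up  1.00000 1.00000
 1 cheaphdd 0.00974         osd.1      up  1.00000 1.00000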

On Fri, May 3, 2019 at 8:58 AM Stefan Kooman <stefan@xxxxxx> wrote:
>
> Hi List,
>
> I'm playing around with CRUSH rules and device classes, and I'm not
> sure they are working correctly. Platform specifics: Ubuntu Bionic
> with Ceph 14.2.1.
>
> I created two new device classes, "cheaphdd" and "fasthdd". I made
> sure these device classes are applied to the right OSDs, and I
> verified that the shadow CRUSH hierarchy correctly filters the right
> class of OSDs for each rule (ceph osd crush tree --show-shadow).
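>
> Roughly, the shadow tree shows one per-class copy of each bucket,
> named bucket~class; a sketch of what I mean (IDs and weights elided):
>
> $ ceph osd crush tree --show-shadow
> ID  CLASS     WEIGHT  TYPE NAME
> -9  fasthdd   ...     root default~fasthdd
> -10 cheaphdd  ...     root default~cheaphdd
> -1            ...     root default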
>
> I then created two new crush rules:
>
> ceph osd crush rule create-replicated fastdisks default host fasthdd
> ceph osd crush rule create-replicated cheapdisks default host cheaphdd
>
> # rules
> rule replicated_rule {
>         id 0
>         type replicated
>         min_size 1
>         max_size 10
>         step take default
>         step chooseleaf firstn 0 type host
>         step emit
> }
> rule fastdisks {
>         id 1
>         type replicated
>         min_size 1
>         max_size 10
>         step take default class fasthdd
>         step chooseleaf firstn 0 type host
>         step emit
> }
> rule cheapdisks {
>         id 2
>         type replicated
>         min_size 1
>         max_size 10
>         step take default class cheaphdd
>         step chooseleaf firstn 0 type host
>         step emit
> }
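>
> A quick way to double-check which root a rule actually takes is the
> JSON dump; for the fastdisks rule the "take" step should reference
> the class-filtered shadow bucket, something like:
>
> $ ceph osd crush rule dump fastdisks | grep item_name
>             "item_name": "default~fasthdd"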
>
> After that I put the cephfs_metadata pool on the fastdisks CRUSH rule:
>
> ceph osd pool set cephfs_metadata crush_rule fastdisks
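>
> To confirm the pool actually picked up the rule, something like:
>
> $ ceph osd pool get cephfs_metadata crush_rule
> crush_rule: fastdisks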
>
> Some data was moved to new OSDs, but strangely enough there is still
> data in PGs residing on OSDs in the "cheaphdd" class. I confirmed this
> with:
>
> ceph pg ls-by-pool cephfs_data
>
> Testing CRUSH rule 1 (fastdisks) gives me:
>
> crushtool -i /tmp/crush_raw --test --show-mappings --rule 1 --min-x 1 --max-x 4  --num-rep 3
> CRUSH rule 1 x 1 [0,3,6]
> CRUSH rule 1 x 2 [3,6,0]
> CRUSH rule 1 x 3 [0,6,3]
> CRUSH rule 1 x 4 [0,6,3]
>
> Which are indeed the OSDs in the fasthdd class.
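>
> (For reference: /tmp/crush_raw above is the binary CRUSH map as
> fetched from the cluster, presumably via something like
> "ceph osd getcrushmap -o /tmp/crush_raw".)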
>
> Why isn't all of the data moved to OSDs 0, 3, and 6? Why is it still
> spread across OSDs in the "cheaphdd" class as well?
>
> Thanks,
>
> Stefan
>
>
> --
> | BIT BV  http://www.bit.nl/        Kamer van Koophandel 09090351
> | GPG: 0xD14839C6                   +31 318 648 688 / info@xxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


