Re: [solved] Changing CRUSH rule on a running cluster


 



Hi Oliver,
could you post the steps you took here on the mailing list?
From the IRC logs you said "if I use "choose .... osd", it works -- but "chooseleaf ... host" doesn't work"
So, to have data balanced between 2 rooms, is the rule "step chooseleaf firstn 0 type room" correct?
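For context, the full rule I have in mind looks something like this (just a sketch: the ruleset number is arbitrary, and it assumes the CRUSH map declares "room" buckets under the default root):

        rule rbdperroom {
              ruleset 6
              type replicated
              min_size 1
              max_size 10
              step take default
              step chooseleaf firstn 0 type room
              step emit
        }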

Thanks

--
Marco


2013/3/8 Olivier Bonvalet <ceph.list@xxxxxxxxx>

>
> Thanks for your answer. So I ran some tests on a dedicated pool, and I was
> able to move data from «platter» to «SSD» without any trouble, which is great.
>
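For reference, pointing an existing pool at the new ruleset is a single command; the pool name and ruleset number below are just placeholders, and on releases of that era the pool property is called crush_ruleset:

        ceph osd pool set rbd crush_ruleset 4

Data then migrates in the background, which can be followed with ceph -w.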
> But I can't get the equivalent working per "network" or per "host":
> with 2 hosts, each with 2 OSDs, and a pool that uses only 1
> replica (so 2 copies in total), I tried this rule:
>
>         rule rbdperhost {
>               ruleset 5
>               type replicated
>               min_size 1
>               max_size 10
>               step take default
>               step chooseleaf firstn 0 type host
>               step emit
>         }
>
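One way to sanity-check what a rule maps to before injecting it is crushtool's test mode (assuming the edited map was compiled to crushmap.new; the exact flags can vary a bit between versions):

        crushtool -c crushmap.txt -o crushmap.new
        crushtool -i crushmap.new --test --rule 5 --num-rep 2 --show-mappings

With only 2 hosts and the legacy tunables, the incomplete mappings (a single OSD instead of two) should already show up there.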
>
> As a result, some PGs are stuck in the «active+remapped» state.
> When querying one of these PGs, I see that CRUSH found only one OSD up for
> it and can't find another OSD to place the replica on.
>
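The stuck PGs, and what CRUSH picked for each of them, can be listed and inspected with (the PG id is just a placeholder):

        ceph pg dump_stuck unclean
        ceph pg <pgid> query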
> If I understand correctly, in this case "chooseleaf firstn 0 type host"
> tells Ceph to choose 2 different hosts, then pick one OSD in each of
> them. So with 2 hosts it should work, shouldn't it?
>
> Thanks,
> Olivier B.
>
>

So, as said on IRC, it's solved. My rules were not working, and after
enabling the «tunables» everything is fine.

I love being able to change how data is spread on a live cluster!
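For anyone hitting the same problem: one way to enable the tunables on a bobtail-era cluster (assuming all clients and kernels in use are recent enough to understand them) is to decompile the CRUSH map, add the tunable lines at the top, and inject it back:

        ceph osd getcrushmap -o crush.bin
        crushtool -d crush.bin -o crush.txt

        # add at the top of crush.txt:
        #   tunable choose_local_tries 0
        #   tunable choose_local_fallback_tries 0
        #   tunable choose_total_tries 50
        #   tunable chooseleaf_descend_once 1

        crushtool -c crush.txt -o crush.new
        ceph osd setcrushmap -i crush.new

chooseleaf_descend_once is the one that usually matters for this "2 hosts, chooseleaf host" case.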


_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

