Re: Changing CRUSH map ids

On Mon, Nov 2, 2015 at 7:42 AM, Loris Cuoghi <lc@xxxxxxxxxxxxxxxxx> wrote:
> Thanks Greg :)
>
> For the OSDs, I understand, on the other hand for intermediate abstractions
> like hosts, racks and rooms, do you agree that it should currently be
> possible to change the IDs (always under the "one change at a time, I
> promise mom" rule)?

Yeah, that should be fine. Changing a bunch of bucket IDs shouldn't
matter, but I haven't actually tried changing them, so no
promises.
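
For anyone who wants to try this offline, here is a rough sketch of the
decompile / edit / recompile cycle. The bucket name (host-03) and the ids
(-5, -20) are made up, and the map text below is a minimal stand-in for
real `crushtool -d` output:

```shell
# A minimal stand-in for a decompiled CRUSH map; on a real cluster you
# would fetch and decompile the live map instead:
#   ceph osd getcrushmap -o crush.bin
#   crushtool -d crush.bin -o crush.txt
cat > crush.txt <<'EOF'
host host-03 {
	id -5		# do not change unnecessarily
	alg straw
	hash 0
	item osd.6 weight 1.000
}
EOF

# Change only the id of that one bucket (-5 -> -20), nothing else:
sed '/^host host-03 {/,/^}/ s/id -5\([^0-9]\)/id -20\1/' crush.txt > crush_new.txt
grep 'id ' crush_new.txt

# Then recompile and compare placements before and after:
#   crushtool -c crush_new.txt -o crush_new.bin
#   crushtool -i crush.bin     --test --rule 0 --num-rep 3 --show-statistics
#   crushtool -i crush_new.bin --test --rule 0 --num-rep 3 --show-statistics
```

The sed range restricts the substitution to the one bucket, so a host
elsewhere with, say, id -50 is left alone.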

>
> Clearly, a good amount of shuffling should be expected as a consequence.
> Basically I was inquiring whether changing the id of a single host would
> shuffle the entirety (or a relatively big chunk) of the cluster data, or if
> the shuffling was limited to a direct proportion of the item's weight.
>
> I just --test-ed with crushtool. I changed a host's id and tested the two
> maps with:
>
> crushtool -i crush.map --test --show-statistics --rule 0 --num-rep 3 --min-x
> 1 --max-x $N --show-mappings
>
> (with $N varying from as little as 32 to "big numbers"TM) shows that nearly
> 50% of the mappings changed, in a 10-host cluster.
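
For what it's worth, the fraction of changed mappings can be counted
directly from two `--show-mappings` outputs. A small sketch; the file names
are made up, and tiny synthetic samples stand in for the real redirected
output (`crushtool -i crush.map --test ... --show-mappings > old_mappings.txt`):

```shell
# Tiny synthetic samples stand in for real `--show-mappings` output so the
# counting logic is runnable as-is; each line is one input x and its mapping:
printf 'CRUSH rule 0 x 1 [0,3,5]\nCRUSH rule 0 x 2 [1,4,6]\n' > old_mappings.txt
printf 'CRUSH rule 0 x 1 [0,3,5]\nCRUSH rule 0 x 2 [2,4,6]\n' > new_mappings.txt

# Pair the two files line by line (paste joins them with a tab) and count
# the rows where the mapping differs:
total=$(awk 'END { print NR }' old_mappings.txt)
changed=$(paste old_mappings.txt new_mappings.txt | awk -F'\t' '$1 != $2 { n++ } END { print n + 0 }')
echo "$changed of $total mappings changed"
# -> 1 of 2 mappings changed
```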
>
> Thanks All :)
>
>
> On 02/11/2015 16:14, Gregory Farnum wrote:
>>
>> Regardless of what the crush tool does, I wouldn't muck around with the
>> IDs of the OSDs. The rest of Ceph will probably not handle it well if
>> the crush IDs don't match the OSD numbers.
>> -Greg
>>
>> On Monday, November 2, 2015, Loris Cuoghi <lc@xxxxxxxxxxxxxxxxx> wrote:
>>
>>     On 02/11/2015 12:47, Wido den Hollander wrote:
>>
>>
>>
>>         On 02-11-15 12:30, Loris Cuoghi wrote:
>>
>>             Hi All,
>>
>>             We're currently on version 0.94.5 with three monitors and 75
>>             OSDs.
>>
>>             I've peeked at the decompiled CRUSH map, and I see that all
>>             ids are
>>             commented with '# Here be dragons!', or more literally : '#
>>             do not
>>             change unnecessarily'.
>>
>>             Now, what would happen if an incautious user were to put his
>>             chubby fingers on these ids, totally disregarding the warning
>>             at the entrance of the cave, and change one of them?
>>
>>             Data shuffle? (Relative to the allocation of PGs for the
>>             OSD/host/other
>>             item?)
>>
>>             A *big* data shuffle? (ALL data would need to have its
>> position
>>             recalculated, with an immediate end-of-the-world data shuffle?)
>>
>>             Nothing at all? (And the big fat warning is there only to
>>             make fun of
>>             the uninstructed? Not plausible...)
>>
>>
>>         Give it a try! Download the CRUSH map and run tests on it with
>>         crushtool:
>>
>>         $ crushtool -i mycrushmap --test --rule 0 --num-rep 3
>>         --show-statistics
>>
>>         Now, change the map, compile it and run again:
>>
>>         $ crushtool -i mycrushmap.new --test --rule 0 --num-rep 3
>>         --show-statistics
>>
>>         Check the differences and you'll get an idea of how much has
>> changed.
>>
>>         Wido
>>
>>
>>     Thanks Wido ! :)
>>
>>             Thanks !
>>
>>             Loris
>>             _______________________________________________
>>             ceph-users mailing list
>>             ceph-users@xxxxxxxxxxxxxx
>>             http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>>
>