Re: CRUSH map utilization issue

Hi,


> I have never tried it, but it gets back to my original question: why the rack in between, rather than adding the hosts directly to the root?
>
> You should add the rack when you want to set the failure domain to racks and thus replicate over multiple racks.
>
> In your case you want the failure domain to be 'host', so I'd suggest to stick with that.


I am currently building a Ceph test cluster, which will eventually have racks, or maybe even rooms, as the failure domain. Because of that, I have also included the rack bucket.
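
Roughly, the layout I am aiming for looks like this (names, IDs and weights below are placeholders for illustration, not the actual map from the pastebin):

# hosts roll up into a rack, and the rack into the root, so the
# failure domain can later move from host to rack (or room) by
# changing "type host" in the rule to "type rack"
host ceph1 {
    id -2
    alg straw
    hash 0  # rjenkins1
    item osd.0 weight 1.000
}

rack rack1 {
    id -3
    alg straw
    hash 0  # rjenkins1
    item ceph1 weight 1.000
}

root default {
    id -1
    alg straw
    hash 0  # rjenkins1
    item rack1 weight 1.000
}

rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host
    step emit
}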


> I prefer KISS and thus separate roots, but I think this is what you're
> after:

> http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map


I have used that article as an example, but the author runs into the same issue I am seeing, and it seems he did not find a solution either. It may very well be that what I am trying to do is simply not possible. I was just hoping someone could explain why I am seeing this behaviour.
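
In case it helps with reproducing this: the incomplete mappings can also be listed directly with crushtool's --show-bad-mappings option (same compiled map and rule as in the output quoted below), which should print every input x for which the rule returned a result of the wrong size:

root@ceph2:~/crush_files# crushtool -i crushmap --test --show-bad-mappings --rule 0 --num-rep 3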


Thanks.



From: Christian Balzer <chibi@xxxxxxx>
Sent: Wednesday, 3 August 2016 10:45:57
To: ceph-users@xxxxxxxxxxxxxx
CC: Rob Reus
Subject: Re: CRUSH map utilization issue
 

Hello,

On Wed, 3 Aug 2016 08:35:49 +0000 Rob Reus wrote:

> Hi Wido,
>
>
> This is indeed something I have tried, and confirmed to work, see the other CRUSH map link I have provided in my original email.
>
>
> However, I was wondering if achieving that same goal, but with only 1 root, is possible/feasible.
>
I prefer KISS and thus separate roots, but I think this is what you're
after:
http://cephnotes.ksperis.com/blog/2015/02/02/crushmap-example-of-a-hierarchical-cluster-map

Christian

>
> Thanks!
>
>
> ________________________________
> From: Wido den Hollander <wido@xxxxxxxx>
> Sent: Wednesday, 3 August 2016 10:30
> To: Rob Reus; ceph-users@xxxxxxxxxxxxxx
> Subject: Re: CRUSH map utilization issue
>
>
> > On 3 August 2016 at 10:08, Rob Reus <rreus@xxxxxxxxxx> wrote:
> >
> >
> > Hi all,
> >
> >
> > I built a CRUSH map with the goal of distinguishing between SSD and HDD storage machines using only 1 root. The map can be found here: http://pastebin.com/VQdB0CE9
> >
> >
> > The issue I am having is this:
> >
> >
> > root@ceph2:~/crush_files# crushtool -i crushmap --test --show-utilization --rule 0 --num-rep 3
> > rule 0 (replicated_ruleset_ssd), x = 0..1023, numrep = 3..3
> > rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 0:    84/1024
> > rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 1:    437/1024
> > rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 2:    438/1024
> > rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 3:    65/1024
> >
> >
> > And then the same test using num-rep 46 (the lowest possible number that shows full utilization):
> >
> >
> > root@ceph2:~/crush_files# crushtool -i crushmap --test --show-utilization --rule 0 --num-rep 46
> > rule 0 (replicated_ruleset_ssd), x = 0..1023, numrep = 46..46
> > rule 0 (replicated_ruleset_ssd) num_rep 46 result size == 3:    1024/1024
> >
> >
> > Full output of above commands can be found here http://pastebin.com/2mbBnmSM and here http://pastebin.com/ar6SAFnX
> >
> >
> > The fact that the num-rep needed for full utilization seems to scale with the number of OSDs I am using leads me to believe I am doing something wrong.
> >
> >
> > When using 2 roots (1 dedicated to SSD and 1 to HDD), everything works perfectly (example: http://pastebin.com/Uthxesut).
> >
> >
> > Would love to know what I am missing.
> >
>
> Can you tell me the reasoning behind adding the rack in between? Since there is only one rack, why add the machines there?
>
> In your case I wouldn't add a new type either, but I would do this:
>
> host machineA-ssd {
>
> }
>
> host machineB-ssd {
>
> }
>
> host machineA-hdd {
>
> }
>
> host machineB-hdd {
>
> }
>
> root ssd {
>     item machineA-ssd
>     item machineB-ssd
> }
>
> root hdd {
>     item machineA-hdd
>     item machineB-hdd
> }
>
> rule replicated_ruleset_ssd {
>     ruleset 0
>     type replicated
>     min_size 1
>     max_size 10
>     step take ssd
>     step chooseleaf firstn 0 type host
>     step emit
> }
>
> rule replicated_ruleset_hdd {
>     ruleset 1
>     type replicated
>     min_size 1
>     max_size 10
>     step take hdd
>     step chooseleaf firstn 0 type host
>     step emit
> }
>
> And try again :)
>
> Wido
>
> >
> > Thanks!
> >
> >
> > - Rob
> >
> >


--
Christian Balzer        Network/Systems Engineer               
chibi@xxxxxxx    Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
