Re: CRUSH map utilization issue

> On 3 August 2016 at 10:35, Rob Reus <rreus@xxxxxxxxxx> wrote:
> 
> 
> Hi Wido,
> 
> 
> This is indeed something I have tried and confirmed to work; see the other CRUSH map link I provided in my original email.
> 

Ah, double e-mails.

> 
> However, I was wondering whether achieving that same goal with only 1 root is possible/feasible.
> 

I have never tried it, but it gets back to my original question: why the rack in between, and why not add the hosts directly to the root?

You should add the rack when you want to set the failure domain to racks and thus replicate over multiple racks.

In your case you want the failure domain to be 'host', so I'd suggest sticking with that.

Custom CRUSH types are supported, no problem; however, 'host' is such a widely used type in CRUSH that I wouldn't change it.
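
To make that concrete, here is a minimal sketch (the rule and bucket names are only examples, not taken from your map) of how the failure domain shows up in a rule; only the type in the chooseleaf step changes:

rule replicated_over_hosts {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type host   # each replica on a different host
    step emit
}

rule replicated_over_racks {
    ruleset 1
    type replicated
    min_size 1
    max_size 10
    step take default
    step chooseleaf firstn 0 type rack   # only useful with multiple racks
    step emit
}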

Wido

> 
> Thanks!
> 
> 
> ________________________________
> From: Wido den Hollander <wido@xxxxxxxx>
> Sent: Wednesday, 3 August 2016 10:30
> To: Rob Reus; ceph-users@xxxxxxxxxxxxxx
> Subject: Re:  CRUSH map utilization issue
> 
> 
> > On 3 August 2016 at 10:08, Rob Reus <rreus@xxxxxxxxxx> wrote:
> >
> >
> > Hi all,
> >
> >
> > I built a CRUSH map with the goal of distinguishing between SSD and HDD storage machines using only 1 root. The map can be found here: http://pastebin.com/VQdB0CE9
> >
> >
> > The issue I am having is this:
> >
> >
> > root@ceph2:~/crush_files# crushtool -i crushmap --test --show-utilization --rule 0 --num-rep 3
> > rule 0 (replicated_ruleset_ssd), x = 0..1023, numrep = 3..3
> > rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 0:    84/1024
> > rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 1:    437/1024
> > rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 2:    438/1024
> > rule 0 (replicated_ruleset_ssd) num_rep 3 result size == 3:    65/1024
> >
> >
> > And then the same test using num-rep 46 (the lowest possible number that shows full utilization):
> >
> >
> > root@ceph2:~/crush_files# crushtool -i crushmap --test --show-utilization --rule 0 --num-rep 46
> > rule 0 (replicated_ruleset_ssd), x = 0..1023, numrep = 46..46
> > rule 0 (replicated_ruleset_ssd) num_rep 46 result size == 3:    1024/1024
> >
> >
> > Full output of above commands can be found here http://pastebin.com/2mbBnmSM and here http://pastebin.com/ar6SAFnX
> >
> >
> > The fact that the required num-rep seems to scale with how many OSDs I am using leads me to believe I am doing something wrong.
> >
> >
> > When using 2 roots (1 dedicated to SSD and 1 to HDD), everything works perfectly (example: http://pastebin.com/Uthxesut).
> >
> >
> > Would love to know what I am missing.
> >
> 
> Can you tell me the reasoning behind adding the rack in between? Since there is only one rack, why add the machines there?
> 
> In your case I wouldn't add a new type either, but I would do this:
> 
> host machineA-ssd {
> 
> }
> 
> host machineB-ssd {
> 
> }
> 
> host machineA-hdd {
> 
> }
> 
> host machineB-hdd {
> 
> }
> 
> root ssd {
>     item machineA-ssd
>     item machineB-ssd
> }
> 
> root hdd {
>     item machineA-hdd
>     item machineB-hdd
> }
> 
> rule replicated_ruleset_ssd {
>     ruleset 0
>     type replicated
>     min_size 1
>     max_size 10
>     step take ssd
>     step chooseleaf firstn 0 type host
>     step emit
> }
> 
> rule replicated_ruleset_hdd {
>     ruleset 1
>     type replicated
>     min_size 1
>     max_size 10
>     step take hdd
>     step chooseleaf firstn 0 type host
>     step emit
> }
> 
> And try again :)
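> 
> A quick way to check the edited map before injecting it (a sketch; it assumes the decompiled text map was saved as crushmap.txt and the compiled binary is written to crushmap, matching the filename used in your test commands):
> 
> # compile the edited text map back into a binary CRUSH map
> crushtool -c crushmap.txt -o crushmap
> 
> # re-run the utilization test against both rules
> crushtool -i crushmap --test --show-utilization --rule 0 --num-rep 3
> crushtool -i crushmap --test --show-utilization --rule 1 --num-rep 3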
> 
> Wido
> 
> >
> > Thanks!
> >
> >
> > - Rob
> >
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


