Upmap balancing - pools grouped together?

I've been trying the upmap balancer on a new Nautilus cluster.  We have three main pools: a triple-replicated pool (id 1) and two 6+3 erasure-coded pools (ids 4 and 5).  The balancer does a very nice job on the triple-replicated pool, but does something strange on the EC pools.  Here is a sample of the PG counts per OSD (rows) per pool (columns):

OSD    1   2   3   4   5    ALL

   0  25   -   -  80  34    139
   1  26   -   -  74  42    142
   2  25   -   -  74  42    141
   3  26   -   -  75  41    142
   4  25   -   -  83  31    139
   5  26   -   -  80  35    141
   6  26   -   -  79  36    141
   7  26   -   -  72  44    142
   8  25   -   -  74  42    141
   9  26   -   -  78  38    142
  10  26   -   -  70  46    142
  11  25   -   -  78  36    139
  12  26   -   -  74  41    141
  13  26   -   -  78  37    141
  14  26   -   -  76  40    142
  15  25   -   -  82  33    140
  16  26   -   -  77  37    140
  17  26   -   -  73  43    142
  18  26   -   -  77  37    140
  19  26   -   -  79  35    140
  20  26   -   -  74  42    142
  21  26   -   -  78  36    140
  22  26   -   -  79  35    140
  23  26   -   -  81  34    141
  24  26   -   -  77  38    141
  25  26   -   -  79  35    140
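
For anyone who wants to reproduce a table like this, something along these
lines works.  This is only a rough sketch: the pg dump JSON layout differs
between releases, so the unwrapping below is an assumption to verify.

    import json, subprocess
    from collections import defaultdict

    # Tally PGs per (OSD, pool) from `ceph pg dump pgs_brief`.
    raw = subprocess.check_output(
        ["ceph", "pg", "dump", "pgs_brief", "--format", "json"])
    data = json.loads(raw)
    if isinstance(data, dict):  # some releases wrap the list in an object
        data = data.get("pg_stats") or data.get("pg_map", {}).get("pg_stats", [])

    counts = defaultdict(lambda: defaultdict(int))  # counts[osd][pool] -> PGs
    pools = set()
    for pg in data:
        pool = pg["pgid"].split(".")[0]  # pgid looks like "4.1f"
        pools.add(pool)
        for osd in pg["up"]:             # every OSD in the PG's up set
            counts[osd][pool] += 1

    header = ["OSD"] + sorted(pools, key=int) + ["ALL"]
    print("  ".join(f"{h:>4}" for h in header))
    for osd in sorted(counts):
        row = [counts[osd][p] for p in sorted(pools, key=int)]
        print("  ".join(f"{v:>4}" for v in [osd] + row + [sum(row)]))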

Pool #1 has 25 or 26 PGs on each OSD, which is extremely consistent.  Pools #4 and #5, on the other hand, each show a lot of variance, yet their *sum* seems to be well balanced (and hence the total PG count per OSD is also very consistent), as if the balancer were balancing the two pools together as one.  Both pools are erasure coded with the same 6+3 profile and use the same CRUSH rule.  Could that be the reason?  If not, any ideas?
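
For reference, a check along these lines should show each pool's rule (a
sketch; I'm assuming the usual "pool"/"pool_name"/"crush_rule" keys in the
`ceph osd dump -f json` output):

    import json, subprocess

    # Print each pool's CRUSH rule to confirm pools 4 and 5 share one rule.
    dump = json.loads(subprocess.check_output(
        ["ceph", "osd", "dump", "--format", "json"]))
    for p in dump["pools"]:
        print(p["pool"], p["pool_name"], "crush_rule:", p["crush_rule"])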

Also, a separate balancer-related question: is there a way to have the balancer balance the amount of data on each OSD per pool, as opposed to the number of PGs?  If not currently, would it be hard to implement?
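
To make the question concrete, here is a crude tally of per-pool bytes per
OSD (a sketch only; stat_sum.num_bytes is the logical PG size, so for EC
pools it overstates per-shard on-disk usage, and the JSON layout again
varies by release):

    import json, subprocess
    from collections import defaultdict

    # Attribute each PG's stored bytes to every OSD in its up set, per pool.
    data = json.loads(subprocess.check_output(
        ["ceph", "pg", "dump", "--format", "json"]))
    pg_stats = data.get("pg_map", data).get("pg_stats", [])

    size = defaultdict(lambda: defaultdict(int))  # size[osd][pool] -> bytes
    for pg in pg_stats:
        pool = pg["pgid"].split(".")[0]
        for osd in pg["up"]:
            size[osd][pool] += pg["stat_sum"]["num_bytes"]

    for osd in sorted(size):
        print(f"osd.{osd}:", ", ".join(
            f"pool {p}: {b / 2**30:.1f} GiB" for p, b in sorted(size[osd].items())))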

Andras
