Re: Balancing PGs across OSDs

Hi Konstantin,

thanks for your suggestions.

> Lars, you have too many PGs for these OSDs. I suggest disabling the PG
> autoscaler and:
> 
> - reduce the number of PGs for the cephfs_metadata pool to something like 16 PGs.

Done.
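
For the record, roughly (autoscaler off for the pool first, then pg_num reduced):

$ ceph osd pool set cephfs_metadata pg_autoscale_mode off
$ ceph osd pool set cephfs_metadata pg_num 16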

> 
> - reduce the number of PGs for cephfs_data to something like 512.

Done.
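
Same for the data pool, roughly:

$ ceph osd pool set cephfs_data pg_autoscale_mode off
$ ceph osd pool set cephfs_data pg_num 512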

> - update the crush rule for the cephfs_metadata pool - set the failure
> domain to 'rack' instead of 'host'.

Done.
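
For anyone following along: one way to do this is to create a new replicated rule with 'rack' as the failure domain and point the pool at it ('replicated_rack' is just an example name):

$ ceph osd crush rule create-replicated replicated_rack default rack
$ ceph osd pool set cephfs_metadata crush_rule replicated_rack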


> Also please paste your `ceph osd tree`.
> 

$ ceph osd tree
ID  CLASS WEIGHT    TYPE NAME                   STATUS REWEIGHT PRI-AFF 
 -1       195.40730 root default                                        
-25       195.40730     room PRZ                                        
-26       195.40730         row rechts                                  
-27        83.74599             rack 1-eins                             
 -3        27.91533                 host onode1                         
  0   hdd   5.51459                     osd.0       up  1.00000 1.00000 
  1   hdd   5.51459                     osd.1       up  1.00000 1.00000 
  2   hdd   5.51459                     osd.2       up  1.00000 1.00000 
  3   hdd   5.51459                     osd.3       up  1.00000 1.00000 
 37   hdd   5.51459                     osd.37      up  1.00000 1.00000 
  4   ssd   0.34239                     osd.4       up  1.00000 1.00000 
-13        27.91533                 host onode4                         
 13   hdd   5.51459                     osd.13      up  1.00000 1.00000 
 14   hdd   5.51459                     osd.14      up  1.00000 1.00000 
 15   hdd   5.51459                     osd.15      up  1.00000 1.00000 
 16   hdd   5.51459                     osd.16      up  1.00000 1.00000 
 40   hdd   5.51459                     osd.40      up  1.00000 1.00000 
 33   ssd   0.34239                     osd.33      up  1.00000 1.00000 
-22        27.91533                 host onode7                         
 25   hdd   5.51459                     osd.25      up  1.00000 1.00000 
 26   hdd   5.51459                     osd.26      up  1.00000 1.00000 
 27   hdd   5.51459                     osd.27      up  1.00000 1.00000 
 28   hdd   5.51459                     osd.28      up  1.00000 1.00000 
 30   hdd   5.51459                     osd.30      up  1.00000 1.00000 
 36   ssd   0.34239                     osd.36      up  1.00000 1.00000 
-28        55.83066             rack 2-zwei                             
 -7        27.91533                 host onode2                         
  5   hdd   5.51459                     osd.5       up  1.00000 1.00000 
  6   hdd   5.51459                     osd.6       up  1.00000 1.00000 
  7   hdd   5.51459                     osd.7       up  1.00000 1.00000 
  8   hdd   5.51459                     osd.8       up  1.00000 1.00000 
 38   hdd   5.51459                     osd.38      up  1.00000 1.00000 
 31   ssd   0.34239                     osd.31      up  1.00000 1.00000 
-16        27.91533                 host onode5                         
 17   hdd   5.51459                     osd.17      up  1.00000 1.00000 
 18   hdd   5.51459                     osd.18      up  1.00000 1.00000 
 19   hdd   5.51459                     osd.19      up  1.00000 1.00000 
 20   hdd   5.51459                     osd.20      up  1.00000 1.00000 
 41   hdd   5.51459                     osd.41      up  1.00000 1.00000 
 34   ssd   0.34239                     osd.34      up  1.00000 1.00000 
-29        55.83066             rack 3-drei                             
-10        27.91533                 host onode3                         
  9   hdd   5.51459                     osd.9       up  1.00000 1.00000 
 10   hdd   5.51459                     osd.10      up  1.00000 1.00000 
 11   hdd   5.51459                     osd.11      up  1.00000 1.00000 
 12   hdd   5.51459                     osd.12      up  1.00000 1.00000 
 39   hdd   5.51459                     osd.39      up  1.00000 1.00000 
 32   ssd   0.34239                     osd.32      up  1.00000 1.00000 
-19        27.91533                 host onode6                         
 21   hdd   5.51459                     osd.21      up  1.00000 1.00000 
 22   hdd   5.51459                     osd.22      up  1.00000 1.00000 
 23   hdd   5.51459                     osd.23      up  1.00000 1.00000 
 24   hdd   5.51459                     osd.24      up  1.00000 1.00000 
 29   hdd   5.51459                     osd.29      up  1.00000 1.00000 
 35   ssd   0.34239                     osd.35      up  1.00000 1.00000 

So I'll just wait for the remapping and PG merging to finish and see what happens.
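I'll keep an eye on it with something like:

$ ceph -s
$ ceph osd pool ls detail | grep cephfs
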
Thanks so far!

Best regards,
Lars


