Re: PG Balancer Upmap mode not working

On 12/7/19 11:42 AM, Philippe D'Anjou wrote:
> Hi,
> the docs say the upmap mode is trying to achieve perfect distribution as
> to have equal amount of PGs/OSD.
> This is what I got(v14.2.4):
> 
>   0   ssd 3.49219  1.00000 3.5 TiB 794 GiB 753 GiB  38 GiB 3.4 GiB 2.7 TiB 22.20 0.32  82     up
>   1   ssd 3.49219  1.00000 3.5 TiB 800 GiB 751 GiB  45 GiB 3.7 GiB 2.7 TiB 22.37 0.33  84     up
>   2   ssd 3.49219  1.00000 3.5 TiB 846 GiB 792 GiB  50 GiB 3.6 GiB 2.7 TiB 23.66 0.35  88     up
>   3   ssd 3.49219  1.00000 3.5 TiB 812 GiB 776 GiB  33 GiB 3.3 GiB 2.7 TiB 22.71 0.33  85     up
>   4   ssd 3.49219  1.00000 3.5 TiB 768 GiB 730 GiB  34 GiB 4.1 GiB 2.7 TiB 21.47 0.31  83     up
>   6   ssd 3.49219  1.00000 3.5 TiB 765 GiB 731 GiB  31 GiB 3.3 GiB 2.7 TiB 21.40 0.31  82     up
>   8   ssd 3.49219  1.00000 3.5 TiB 872 GiB 828 GiB  41 GiB 3.2 GiB 2.6 TiB 24.40 0.36  85     up
>  10   ssd 3.49219  1.00000 3.5 TiB 789 GiB 743 GiB  42 GiB 3.3 GiB 2.7 TiB 22.05 0.32  82     up
>   5   ssd 3.49219  1.00000 3.5 TiB 719 GiB 683 GiB  32 GiB 3.9 GiB 2.8 TiB 20.12 0.29  78     up
>   7   ssd 3.49219  1.00000 3.5 TiB 741 GiB 698 GiB  39 GiB 3.8 GiB 2.8 TiB 20.73 0.30  79     up
>   9   ssd 3.49219  1.00000 3.5 TiB 709 GiB 664 GiB  41 GiB 3.5 GiB 2.8 TiB 19.82 0.29  78     up
>  11   ssd 3.49219  1.00000 3.5 TiB 858 GiB 834 GiB  22 GiB 2.4 GiB 2.7 TiB 23.99 0.35  82     up
> 101   ssd 3.49219  1.00000 3.5 TiB 815 GiB 774 GiB  38 GiB 3.5 GiB 2.7 TiB 22.80 0.33  80     up
> 103   ssd 3.49219  1.00000 3.5 TiB 827 GiB 783 GiB  40 GiB 3.3 GiB 2.7 TiB 23.11 0.34  81     up
> 105   ssd 3.49219  1.00000 3.5 TiB 797 GiB 759 GiB  36 GiB 2.5 GiB 2.7 TiB 22.30 0.33  81     up
> 107   ssd 3.49219  1.00000 3.5 TiB 840 GiB 788 GiB  50 GiB 2.8 GiB 2.7 TiB 23.50 0.34  83     up
> 100   ssd 3.49219  1.00000 3.5 TiB 728 GiB 678 GiB  47 GiB 2.4 GiB 2.8 TiB 20.36 0.30  78     up
> 102   ssd 3.49219  1.00000 3.5 TiB 764 GiB 750 GiB  12 GiB 2.2 GiB 2.7 TiB 21.37 0.31  76     up
> 104   ssd 3.49219  1.00000 3.5 TiB 795 GiB 761 GiB  31 GiB 2.5 GiB 2.7 TiB 22.22 0.33  78     up
> 106   ssd 3.49219  1.00000 3.5 TiB 730 GiB 665 GiB  62 GiB 2.8 GiB 2.8 TiB 20.41 0.30  78     up
> 108   ssd 3.49219  1.00000 3.5 TiB 849 GiB 808 GiB  38 GiB 2.5 GiB 2.7 TiB 23.73 0.35  92     up
> 109   ssd 3.49219  1.00000 3.5 TiB 798 GiB 754 GiB  41 GiB 2.7 GiB 2.7 TiB 22.30 0.33  83     up
> 110   ssd 3.49219  1.00000 3.5 TiB 840 GiB 810 GiB  28 GiB 2.4 GiB 2.7 TiB 23.49 0.34  85     up
> 111   ssd 3.49219  1.00000 3.5 TiB 788 GiB 741 GiB  45 GiB 2.5 GiB 2.7 TiB 22.04 0.32  85     up
> 
> PGs are badly distributed.

From what information do you draw that conclusion? You are using about 22%
on all OSDs.
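
To see the actual spread, something along these lines should work (assuming
the PGS column is the second-to-last field of ceph osd df, as in your paste):

    # print "osd-id pg-count" for every up OSD, sorted by PG count
    ceph osd df | awk '$NF == "up" { print $1, $(NF-1) }' | sort -n -k2

In your output that spread is roughly 76 to 92 PGs per OSD.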

I suggest that you increase your PG count to at least 100 PGs per OSD;
that will make the distribution even better.
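
Roughly like this, with <pool> and the pg_num value as placeholders you have
to fill in for your setup (check the autoscaler view first; splitting PGs
triggers data movement, and depending on the exact release you may also need
to bump pgp_num):

    # per-pool PG counts and what the pg_autoscaler would recommend
    ceph osd pool autoscale-status
    # raise the PG count for a pool (placeholder name and value)
    ceph osd pool set <pool> pg_num 256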

Wido

> ceph balancer status
> {
>     "active": true,
>     "plans": [],
>     "mode": "upmap"
> }
> 
> Is it because of this?
>     health: HEALTH_WARN
>             Failed to send data to Zabbix
>             1 subtrees have overcommitted pool target_size_bytes
>             1 subtrees have overcommitted pool target_size_ratio
> 
> 
> Any ideas why it's not working?
> 
> 
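
As for the balancer status and the target_size warnings above: those
overcommitted warnings come from the pg_autoscaler, not from the balancer
itself, so they should not be what stops it. To see whether the balancer
actually thinks there is anything left to do, you can ask it directly
("myplan" is just an example plan name here):

    # score the current distribution and a hand-built plan
    ceph balancer eval
    ceph balancer optimize myplan
    ceph balancer show myplan
    ceph balancer eval myplan
    # what the pg_autoscaler thinks about the target_size settings
    ceph osd pool autoscale-status

If I remember right, the upmap balancer also has an upmap_max_deviation
option, so it will treat a small spread in PGs per OSD as already balanced
and do nothing.
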
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



