Re: PG Balancer Upmap mode not working

On 12/7/19 3:39 PM, Philippe D'Anjou wrote:
> @Wido Den Hollander 
> 
> First of all the docs say: "In most cases, this distribution is
> “perfect,” which an equal number of PGs on each OSD (+/-1 PG, since they
> might not divide evenly)."
> Either this is just false information or very badly stated.

Might be both. But what are you trying to achieve? PGs will never be
exactly the same size, because the objects in them vary in size.

The end result you are after, I assume, is equally filled OSDs.
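
If it helps to put a number on "equally filled", a quick Python sketch like
the one below summarises the spread. It assumes the JSON layout of
'ceph osd df -f json' (a "nodes" list with "utilization" and "pgs" per OSD),
so double-check the field names on your release:

#!/usr/bin/env python3
# Summarise how evenly the OSDs are filled, straight from 'ceph osd df'.
# Assumption: 'ceph osd df -f json' returns {"nodes": [...]} where each
# node carries "utilization" (%USE) and "pgs"; adjust if yours differs.
import json
import statistics
import subprocess

nodes = json.loads(subprocess.check_output(["ceph", "osd", "df", "-f", "json"]))["nodes"]

util = [n["utilization"] for n in nodes]
pgs = [n["pgs"] for n in nodes]

print("OSDs: %d" % len(nodes))
print("%%USE: min %.2f  max %.2f  mean %.2f  stdev %.2f"
      % (min(util), max(util), statistics.mean(util), statistics.stdev(util)))
print("PGs:  min %d  max %d  mean %.1f" % (min(pgs), max(pgs), statistics.mean(pgs)))

'ceph balancer eval' also prints the balancer's own score for the current
distribution; a lower score means a more even distribution.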

> 
> I increased PGs and see no difference.
> 
> I pointed out MULTIPLE times that Nautilus has major flaws in the data
> distribution but nobody seems to listen to me. Not sure how much more
> evidence I have to show.
> 

What has changed? Data placement can only change if Nautilus changed the
CRUSH algorithm, which it didn't. Neither upgrading from Mimic nor from
Luminous causes a major shift in data.
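
One thing worth checking before looking at CRUSH at all: is the balancer
actually injecting upmap entries into the osdmap? 'ceph balancer status'
shows the active mode, and something like the sketch below counts the
explicit mappings (assuming 'ceph osd dump -f json' exposes a
"pg_upmap_items" list; adjust if your release names it differently):

#!/usr/bin/env python3
# Count the explicit upmap entries in the osdmap. If this is zero while
# the balancer claims to be active in upmap mode, the balancer itself is
# what needs debugging, not the data distribution.
import json
import subprocess

osdmap = json.loads(subprocess.check_output(["ceph", "osd", "dump", "-f", "json"]))
items = osdmap.get("pg_upmap_items", [])

print("pg_upmap_items entries: %d" % len(items))
for entry in items[:10]:   # print a small sample
    print(entry)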

>   0   ssd 3.49219  1.00000 3.5 TiB 715 GiB 674 GiB  37 GiB 3.9 GiB 2.8 TiB 19.99 0.29 147     up
>   1   ssd 3.49219  1.00000 3.5 TiB 724 GiB 672 GiB  49 GiB 3.8 GiB 2.8 TiB 20.25 0.30 146     up
>   2   ssd 3.49219  1.00000 3.5 TiB 736 GiB 681 GiB  50 GiB 4.4 GiB 2.8 TiB 20.57 0.30 150     up
>   3   ssd 3.49219  1.00000 3.5 TiB 712 GiB 676 GiB  33 GiB 3.5 GiB 2.8 TiB 19.92 0.29 146     up
>   4   ssd 3.49219  1.00000 3.5 TiB 752 GiB 714 GiB  34 GiB 4.6 GiB 2.8 TiB 21.03 0.31 156     up
>   6   ssd 3.49219  1.00000 3.5 TiB 710 GiB 671 GiB  35 GiB 3.8 GiB 2.8 TiB 19.85 0.29 146     up
>   8   ssd 3.49219  1.00000 3.5 TiB 781 GiB 738 GiB  40 GiB 3.7 GiB 2.7 TiB 21.85 0.32 156     up
>  10   ssd 3.49219  1.00000 3.5 TiB 728 GiB 682 GiB  42 GiB 4.0 GiB 2.8 TiB 20.35 0.30 146     up
>   5   ssd 3.49219  1.00000 3.5 TiB 664 GiB 628 GiB  32 GiB 4.3 GiB 2.8 TiB 18.58 0.27 141     up
>   7   ssd 3.49219  1.00000 3.5 TiB 656 GiB 613 GiB  39 GiB 4.0 GiB 2.9 TiB 18.35 0.27 136     up
>   9   ssd 3.49219  1.00000 3.5 TiB 632 GiB 586 GiB  41 GiB 4.4 GiB 2.9 TiB 17.67 0.26 131     up
>  11   ssd 3.49219  1.00000 3.5 TiB 725 GiB 701 GiB  22 GiB 2.6 GiB 2.8 TiB 20.28 0.30 138     up
> 101   ssd 3.49219  1.00000 3.5 TiB 755 GiB 713 GiB  38 GiB 3.9 GiB 2.8 TiB 21.11 0.31 146     up
> 103   ssd 3.49219  1.00000 3.5 TiB 761 GiB 718 GiB  40 GiB 3.6 GiB 2.7 TiB 21.29 0.31 150     up
> 105   ssd 3.49219  1.00000 3.5 TiB 715 GiB 676 GiB  36 GiB 2.6 GiB 2.8 TiB 19.99 0.29 148     up
> 107   ssd 3.49219  1.00000 3.5 TiB 760 GiB 706 GiB  50 GiB 3.2 GiB 2.8 TiB 21.24 0.31 147     up
> 100   ssd 3.49219  1.00000 3.5 TiB 724 GiB 674 GiB  47 GiB 2.5 GiB 2.8 TiB 20.25 0.30 144     up
> 102   ssd 3.49219  1.00000 3.5 TiB 669 GiB 654 GiB  12 GiB 2.3 GiB 2.8 TiB 18.71 0.27 141     up
> 104   ssd 3.49219  1.00000 3.5 TiB 721 GiB 687 GiB  31 GiB 3.0 GiB 2.8 TiB 20.16 0.30 144     up
> 106   ssd 3.49219  1.00000 3.5 TiB 715 GiB 646 GiB  65 GiB 3.8 GiB 2.8 TiB 19.99 0.29 143     up
> 108   ssd 3.49219  1.00000 3.5 TiB 729 GiB 691 GiB  36 GiB 2.6 GiB 2.8 TiB 20.38 0.30 156     up
> 109   ssd 3.49219  1.00000 3.5 TiB 732 GiB 684 GiB  45 GiB 3.0 GiB 2.8 TiB 20.47 0.30 146     up
> 110   ssd 3.49219  1.00000 3.5 TiB 773 GiB 743 GiB  28 GiB 2.7 GiB 2.7 TiB 21.63 0.32 154     up
> 111   ssd 3.49219  1.00000 3.5 TiB 708 GiB 660 GiB  45 GiB 2.7 GiB 2.8 TiB 19.78 0.29 146     up
> 
> The % fill rate is no different than before; it still fluctuates a lot.

All OSDs are very close to 20% used; that's very good.
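
Just to quantify that from the %USE column you pasted (throwaway snippet,
values copied straight from your output):

# %USE per OSD, copied from the 'ceph osd df' output above
use = [19.99, 20.25, 20.57, 19.92, 21.03, 19.85, 21.85, 20.35,
       18.58, 18.35, 17.67, 20.28, 21.11, 21.29, 19.99, 21.24,
       20.25, 18.71, 20.16, 19.99, 20.38, 20.47, 21.63, 19.78]
print(min(use), max(use), round(sum(use) / len(use), 2))
# -> 17.67 21.85 20.15: everything within about 2.5 points of the mean

That is a fairly tight spread for OSDs that are only ~20% full.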

What is the real problem here?

Wido
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



