Re: Autoscale recommendation seems too small + it broke my pool...


 



Your pg_num is fine; there's no reason to change it unless you run into problems. One could argue that your smaller OSDs have too few PGs, but the larger OSDs have reasonable values. I would probably leave it as it is.
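
To put some numbers on that: with pg_num=512 and size=3 you have 1536 PG replicas spread over 18 OSDs, i.e. about 85 per OSD on average, which is right around the usual target of ~100. Going by the osd df output below, the 2.7 TiB OSDs carry 124-143 PGs each while the 466-559 GiB ones only carry 28-38, which is what I meant by the small ones being on the low side; that simply follows from the size imbalance.

The autoscaler, on the other hand, sizes pools by the data they actually hold, not by raw capacity. If I remember its math correctly, the suggestion comes out to roughly

  0.3114 (your RATIO) x 18 OSDs x 100 target PGs per OSD / 3 replicas = ~187

rounded to the nearest power of two, hence the 128. So the 128 isn't wrong as such, it just answers a different question than the preselection table in the docs.

And should you ever decide to shrink pg_num: Nautilus can merge PGs, so setting a lower value with "ceph osd pool set <pool> pg_num <n>" is applied gradually by the mgr. But as said, I would leave it alone.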

Regarding the inactive PGs, how are your pools configured? Can you share

ceph osd pool ls detail

It could be an issue with min_size (is it also set to 3?).
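
If you just want to check that one value (pool name "ceph" taken from your autoscale output), something like this should do:

ceph osd pool get ceph min_size

If it turns out to be 3, then losing or restarting a single OSD is already enough to block I/O on its PGs; lowering it to 2 with

ceph osd pool set ceph min_size 2

would let the pool keep serving I/O with one copy missing. That's only a guess at the cause though, so please check the actual settings first.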


Quoting Lindsay Mathieson <lindsay.mathieson@xxxxxxxxx>:

Nautilus 14.2.9, set up using Proxmox.

 * 5 Hosts
 * 18 OSDs with a mix of disk sizes (3TB, 1TB, 500GB), all bluestore
 * Pool size = 3, pg_num = 512

According to:

https://docs.ceph.com/docs/nautilus/rados/operations/placement-groups/#preselection

With 18 OSDs I should be using pg_num=1024, but I actually have it set to 512.
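
(Rough math behind that, using the usual rule of thumb rather than the exact table in the docs:

  (18 OSDs x 100 PGs per OSD) / 3 replicas = 600

which rounds up to the next power of two, 1024, or down to 512 if you round to the nearest one.)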


However, the autoscaler is recommending pg_num=128.

Additionally, I accidentally set autoscale to on rather than warn, so it started the process. I rapidly got a "Reduced data availability: 2 pgs inactive" warning and I/O on the pool stopped. I cleared the warning by restarting the affected OSDs for the PG IDs, but then more cropped up. I only managed to stop it and restore access to the pool by turning autoscale off and setting pg_num back to 512.
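
For reference, this is roughly what I ran to stop it (pool name is "ceph"):

ceph osd pool set ceph pg_autoscale_mode off
ceph osd pool set ceph pg_num 512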


Autoscale warning:

POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  BIAS  PG_NUM  NEW PG_NUM  AUTOSCALE
ceph  3239G               3.0   31205G        0.3114                1.0   512     128         warn
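
(That table is the output of "ceph osd pool autoscale-status".)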


osd df tree:

ID  CLASS  WEIGHT    REWEIGHT  SIZE     RAW USE  DATA     OMAP     META      AVAIL    %USE   VAR   PGS  STATUS  TYPE NAME
-1          30.47374        -   30 TiB  7.4 TiB  7.4 TiB   28 MiB    23 GiB   23 TiB  24.40  1.00    -          root default
-5           5.45798        -  5.5 TiB  1.4 TiB  1.3 TiB  4.4 MiB   3.4 GiB  4.1 TiB  24.78  1.02    -              host loc
 1  hdd      2.72899  0.95001  2.7 TiB  679 GiB  677 GiB  3.1 MiB   1.9 GiB  2.1 TiB  24.31  1.00  136      up        osd.1
 8  hdd      2.72899  1.00000  2.7 TiB  706 GiB  704 GiB  1.3 MiB   1.5 GiB  2.0 TiB  25.26  1.04  143      up        osd.8
-9           1.36449        -  1.4 TiB  431 GiB  429 GiB  1.4 MiB   2.1 GiB  966 GiB  30.84  1.26    -              host lod
 5  hdd      0.90970  1.00000  932 GiB  291 GiB  290 GiB  1.1 MiB   1.1 GiB  641 GiB  31.20  1.28   59      up        osd.5
12  hdd      0.45479  1.00000  466 GiB  140 GiB  139 GiB  293 KiB  1024 MiB  325 GiB  30.12  1.23   28      up        osd.12
-11         10.91595        -   11 TiB  2.4 TiB  2.4 TiB  6.9 MiB   5.8 GiB  8.5 TiB  21.94  0.90    -              host vnb
 6  hdd      2.72899  1.00000  2.7 TiB  613 GiB  612 GiB  2.5 MiB   1.5 GiB  2.1 TiB  21.95  0.90  124      up        osd.6
 7  hdd      2.72899  1.00000  2.7 TiB  614 GiB  613 GiB  1.7 MiB   1.4 GiB  2.1 TiB  21.98  0.90  124      up        osd.7
 9  hdd      2.72899  1.00000  2.7 TiB  617 GiB  615 GiB  1.5 MiB   1.4 GiB  2.1 TiB  22.06  0.90  124      up        osd.9
17  hdd      2.72899  1.00000  2.7 TiB  608 GiB  607 GiB  1.3 MiB   1.5 GiB  2.1 TiB  21.76  0.89  124      up        osd.17
-3           4.54836        -  4.5 TiB  1.3 TiB  1.3 TiB   11 MiB   7.0 GiB  3.3 TiB  27.76  1.14    -              host vnh
 0  hdd      0.90970  0.95001  932 GiB  220 GiB  219 GiB  3.0 MiB  1021 MiB  711 GiB  23.64  0.97   44      up        osd.0
 2  hdd      0.90970  0.95001  932 GiB  252 GiB  251 GiB  536 KiB  1023 MiB  679 GiB  27.06  1.11   51      up        osd.2
10  hdd      0.54579  1.00000  559 GiB  158 GiB  157 GiB  1.6 MiB  1022 MiB  401 GiB  28.29  1.16   32      up        osd.10
11  hdd      0.54579  1.00000  559 GiB  157 GiB  156 GiB  332 KiB  1024 MiB  402 GiB  28.10  1.15   32      up        osd.11
14  hdd      0.54579  1.00000  559 GiB  187 GiB  186 GiB  1.1 MiB   1.0 GiB  372 GiB  33.45  1.37   38      up        osd.14
15  hdd      0.54579  1.00000  559 GiB  159 GiB  158 GiB  2.0 MiB  1022 MiB  400 GiB  28.51  1.17   32      up        osd.15
16  hdd      0.54579  1.00000  559 GiB  159 GiB  158 GiB  2.8 MiB  1021 MiB  400 GiB  28.46  1.17   32      up        osd.16
-7           8.18697        -  8.2 TiB  2.0 TiB  2.0 TiB  4.6 MiB   4.5 GiB  6.2 TiB  24.50  1.00    -              host vni
 3  hdd      2.72899  1.00000  2.7 TiB  670 GiB  669 GiB  1.2 MiB   1.4 GiB  2.1 TiB  23.99  0.98  134      up        osd.3
 4  hdd      2.72899  1.00000  2.7 TiB  681 GiB  679 GiB  1.6 MiB   1.6 GiB  2.1 TiB  24.36  1.00  136      up        osd.4
13  hdd      2.72899  1.00000  2.7 TiB  703 GiB  701 GiB  1.8 MiB   1.5 GiB  2.0 TiB  25.14  1.03  143      up        osd.13
                        TOTAL   30 TiB  7.4 TiB  7.4 TiB   28 MiB    23 GiB   23 TiB  24.40

Should I be reducing the pg_num? Is there a way to do it safely?


Thanks.

--
Lindsay


_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



