autoscale-status reports some of my PG_NUMs are way too big
I have a pool with pg_num 256 and the autoscaler says it should be 32:
POOL  SIZE   TARGET SIZE  RATE  RAW CAPACITY  RATIO   TARGET RATIO  PG_NUM  NEW PG_NUM  AUTOSCALE
rbd   1214G               3.0   56490G        0.0645                256     32          warn
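If I'm reading the autoscaler's math right (RATIO * osd_count * mon_target_pg_per_osd / replica size, rounded to the nearest power of two; that formula is my guess from the docs, not checked against the source), the suggested 32 is consistent with my 12 OSDs and the default mon_target_pg_per_osd of 100:

0.0645 * 12 * 100 / 3 = 25.8  ->  nearest power of two = 32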
If I try to decrease pg_num, I get:
# ceph osd pool set rbd pg_num 32
Error EPERM: nautilus OSDs are required to decrease pg_num
But all my OSDs are on Nautilus:

# ceph tell osd.* version
osd.0: {
    "version": "ceph version 14.2.0 (3a54b2b6d167d4a2a19e003a705696d4fe619afc) nautilus (stable)"
}
[... osd.1 through osd.11 all report the same 14.2.0 nautilus version ...]
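(For a quicker aggregate check, "ceph versions" summarizes the running versions of all daemon types, mons and mgrs included, in one go:

# ceph versions

which would also confirm that the mons and mgrs are upgraded.)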
Should I leave the pg_num values as they are, or is there a way to reduce them?
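My unverified guess is that the EPERM check is against the OSDMap's require_osd_release flag rather than the versions of the running daemons, in which case an upgraded cluster would still need

# ceph osd dump | grep require_osd_release
# ceph osd require-osd-release nautilus

before pg_num can be lowered. Can anyone confirm whether that flag is the issue here, and that it is safe to set?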