Is autoscale working with an EC pool?

Hi,

In our cluster only the data pool is on EC 4:2; the other pools are replica 3.
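
(For completeness, this is roughly how I'm confirming the layout; the profile name comes from the pool listing, so treat "myprofile" below as a placeholder:)

# per-pool size/min_size, crush rule and erasure-code profile
ceph osd pool ls detail

# k/m of the data pool's profile ("myprofile" is a placeholder name)
ceph osd erasure-code-profile ls
ceph osd erasure-code-profile get myprofile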

--- RAW STORAGE ---
CLASS  SIZE     AVAIL    USED     RAW USED  %RAW USED
nvme    10 TiB   10 TiB  121 MiB   408 GiB       3.80
ssd    524 TiB  440 TiB   84 TiB    84 TiB      16.01
TOTAL  534 TiB  450 TiB   84 TiB    84 TiB      15.77

--- POOLS ---
POOL                    ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
device_health_metrics    1    1   11 MiB       55   32 MiB      0    124 TiB
.rgw.root                2   32  1.0 MiB      178  4.5 MiB      0    124 TiB
hkg.rgw.log             22   32   22 GiB   56.35k   65 GiB   0.02    124 TiB
hkg.rgw.control         23   32  2.6 KiB        8  7.8 KiB      0    124 TiB
hkg.rgw.meta            24    8  322 KiB    1.08k   13 MiB      0    3.2 TiB
hkg.rgw.otp             25   32      0 B        0      0 B      0    124 TiB
hkg.rgw.buckets.index   26  128  134 GiB   58.57k  403 GiB   3.99    3.2 TiB
hkg.rgw.buckets.non-ec  27   32   18 MiB  189.31k  2.2 GiB      0    124 TiB
hkg.rgw.buckets.data    28   32   42 TiB  300.06M   63 TiB  14.50    247 TiB

As you can see, the data pool has only 32 PGs. If I do a rough calculation, the maximum PG count for the data pool with 36 OSDs should be 512, or let's say 256 since the other pools are replica 3 and also need PGs. At 256 PGs that works out to about 524 TiB / 256 ≈ 2 TiB per PG, so with 84 TiB stored the autoscaler should have already warned me to increase the PG count in that pool, shouldn't it? Or does autoscale not work with EC pools?
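
In case it helps to answer: this is how I'm looking at what the autoscaler thinks, assuming the pg_autoscaler mgr module is enabled (the exact output columns may differ between releases):

# is the autoscaler enabled for the data pool?
ceph osd pool get hkg.rgw.buckets.data pg_autoscale_mode

# what PG_NUM does the autoscaler currently report/target per pool?
ceph osd pool autoscale-status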

Thank you
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
