I guess it's not an issue in larger setups, but I hope there's some
feature to inform the user that the pool is not safe.
And what is the general rule? If k+m = #OSDs, then don't use disks of
different sizes?
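My rough back-of-the-envelope understanding (please correct me if I'm
wrong) is that when k+m equals the number of OSDs, every PG has to put
one chunk on every OSD, so the smallest OSD fills up first (which seems
to match the output quoted below - all three OSDs show the same RAW
USE). Something like this, using the disk sizes from that output and
assuming the default 0.95 full ratio:

root@skarb:~# awk 'BEGIN {
    k = 2; m = 1; smallest = 1.8; full_ratio = 0.95   # adjust to your own full ratio
    printf "raw usable  ~ %.1f TiB\n", (k + m) * smallest * full_ratio
    printf "data usable ~ %.1f TiB\n", k * smallest * full_ratio
}'
raw usable  ~ 5.1 TiB
data usable ~ 3.4 TiB

If that's right, the extra ~0.9 TiB on the bigger disk simply can't be
used by this pool.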
P.
On 8.11.2022 at 15:25, Paweł Kowalski wrote:
Hi,
I've set up a minimal EC setup - 3 OSDs, k=2, m=1:
root@skarb:~# ceph osd df
ID  CLASS    WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP     META     AVAIL    %USE   VAR   PGS  STATUS
[...]
 9  low_hdd  2.72849   1.00000  2.7 TiB  632 GiB  631 GiB  121 KiB  1.6 GiB  2.1 TiB  22.62  0.67   32  up
10  low_hdd  1.81879   1.00000  1.8 TiB  632 GiB  631 GiB  121 KiB  1.6 GiB  1.2 TiB  33.94  1.01   32  up
11  low_hdd  1.81879   1.00000  1.8 TiB  632 GiB  631 GiB  121 KiB  1.6 GiB  1.2 TiB  33.94  1.01   32  up
[...]
root@skarb:~# ceph df
--- RAW STORAGE ---
CLASS    SIZE     AVAIL    USED     RAW USED  %RAW USED
[...]
low_hdd  6.4 TiB  4.5 TiB  1.8 TiB   1.8 TiB      29.04
[...]
--- POOLS ---
POOL                         ID  PGS  STORED   OBJECTS  USED     %USED  MAX AVAIL
[...]
ceph3_ec_low_k2_m1-data      20   32  1.2 TiB  325.96k  1.8 TiB  32.16    2.6 TiB
ceph3_ec_low_k2_m1-metadata  21   32  319 KiB        5  970 KiB      0    5.5 TiB
[...]
As you can see, the first OSD is larger (2.7 TiB) than the 2nd and 3rd.
The question is: is it possible to check (not calculate) the safe
available storage space on this setup? ceph df shows 4.5 TiB available,
but obviously the pool isn't ready for the first OSD's failure.
And if I manage to calculate a safe size, how do I make this survive the
first OSD's failure? I guess it's not as simple as "just don't use more
than xxx space"...
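But if it does come down to a simple "don't use more than X" limit,
would a pool quota be the right way to enforce it? Something like the
command below (the max_bytes value is just a placeholder, not a
calculated number):

root@skarb:~# ceph osd pool set-quota ceph3_ec_low_k2_m1-data max_bytes 3500000000000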
Regards,
Paweł
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx