Re: Cluster healthy, but 16.2.7 osd daemon upgrade says it's unsafe to stop them?

On 5/27/22 11:41, Bogdan Adrian Velica wrote:
> Hi,
>
> Can you please tell us the size of your ceph cluster? How many OSDs do you have?

16 OSDs.

$ ceph df
--- RAW STORAGE ---
CLASS     SIZE    AVAIL     USED  RAW USED  %RAW USED
hdd    8.9 TiB  8.3 TiB  595 GiB   595 GiB       6.55
ssd    7.6 TiB  7.0 TiB  664 GiB   664 GiB       8.49
TOTAL   17 TiB   15 TiB  1.2 TiB   1.2 TiB       7.45

--- POOLS ---
POOL                      ID  PGS   STORED  OBJECTS     USED  %USED  MAX AVAIL
rbd                        0   64  4.4 GiB    1.11k  8.7 GiB   0.06    6.8 TiB
.rgw.root                  1   32   41 KiB       12  633 KiB      0    6.8 TiB
default.rgw.control        2   32   24 KiB        8   47 KiB      0    6.8 TiB
default.rgw.data.root      3   32   10 KiB        0   21 KiB      0    6.8 TiB
default.rgw.gc             4   32  1.3 MiB       32  4.7 MiB      0    6.8 TiB
default.rgw.log            5   32  5.5 MiB      179   11 MiB      0    6.8 TiB
default.rgw.users.uid      6   32  2.5 KiB        1   72 KiB      0    6.8 TiB
mathfs_data               12   32  140 GiB    1.06M  388 GiB   2.69    6.0 TiB
mathfs_metadata           13   32  598 MiB   75.75k  1.8 GiB   0.01    4.6 TiB
default.rgw.lc            15   32  245 KiB       32  491 KiB      0    6.8 TiB
libvirt                   21   32  172 GiB   44.47k  491 GiB   3.38    4.6 TiB
monthly_archive_metadata  36   32  426 MiB   20.66k  853 MiB      0    6.8 TiB
monthly_archive_data      37   32   39 GiB  263.23k   93 GiB   0.66    6.8 TiB
device_health_metrics     38    1   84 MiB       22  168 MiB      0    6.8 TiB
lensfun_metadata          41   32  246 MiB      544  493 MiB      0    6.8 TiB
lensfun_data              42   32  131 GiB   37.65k  263 GiB   1.84    6.8 TiB
default.rgw.users.keys    43   32     13 B        1  128 KiB      0    6.8 TiB
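
For reference, ceph df doesn't show replication settings; the per-pool
size/min_size values mentioned below can be listed with something like:

$ ceph osd pool ls detail

or queried per pool, e.g.:

$ ceph osd pool get libvirt size
$ ceph osd pool get libvirt min_size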


> The default recommendation is a min_size of 2 and a replica count of 3 per replicated pool.

Thanks. I don't recall creating any of the default.* pools, so they might have been created by ceph-deploy, years ago (kraken?). They all have min_size 1, replica 2.
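
If I were to follow that recommendation, bumping a pool would look roughly
like this (shown for .rgw.root as an example, repeated for each size-2 pool):

$ ceph osd pool set .rgw.root size 3
$ ceph osd pool set .rgw.root min_size 2

and, if I understand it correctly, the same safe-to-stop check the upgrade
performs can be re-run by hand with:

$ ceph osd ok-to-stop <osd-id>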

--
Sarunas Burdulis
Dartmouth Mathematics
math.dartmouth.edu/~sarunas

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
