Re: Need urgent help for ceph health error issue

Den tors 9 dec. 2021 kl 03:12 skrev Md. Hejbul Tawhid MUNNA
<munnaeebd@xxxxxxxxx>:
>
> Hi,
>
> Yes, we have added new OSDs. Previously we had only one type of disk, hdd;
> now we have added ssd disks and separated them with a replicated_rule and
> device class.
>
> ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE  VAR  PGS
>  0   hdd 5.57100  1.00000 5.6 TiB 1.8 TiB 3.8 TiB 31.61 1.04  850
>  1   hdd 5.57100  1.00000 5.6 TiB 1.6 TiB 4.0 TiB 29.07 0.96  830
>  2   hdd 5.57100  1.00000 5.6 TiB 1.6 TiB 4.0 TiB 27.98 0.92  820
>  3   hdd 5.57100  1.00000 5.6 TiB 1.3 TiB 4.2 TiB 23.74 0.78  696

Apart from having way too many PGs per OSD, the toofull warning/error looks
bogus: all of the OSDs in this list are around 25-30% used, so there is no
lack of space as far as this listing goes. Could it be that you moved a pool
over to ssd while you only have two ssd OSDs, but the pool needs three or
more ssds to satisfy its failure domain rule?
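
Something along these lines should show whether that is what is happening
(the rule name below is only a placeholder, use your actual one):

  # how many OSDs of each device class exist, and under which hosts
  ceph osd df tree

  # size (replica count), crush_rule and pg_num for every pool
  ceph osd pool ls detail

  # which failure domain and device class a given rule selects
  ceph osd crush rule dump your-ssd-rule

  # the exact pools/PGs behind the current health warnings
  ceph health detail

If the pool on the ssd rule has size 3 but there are only two ssd OSDs
(or two hosts with ssds, if the failure domain is host), the PGs can never
be placed, and ceph will keep complaining even though the raw space in the
listing above looks fine.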


-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


