Re: [Urgent] Ceph system Down, Ceph FS volume in recovering

You might also want to increase mon_max_pg_per_osd, since you have a wide spread of OSD sizes.

The default is 250; set it to 1000.
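
A minimal sketch of how that could look, assuming a release with the centralized config store (Mimic or later); scoping it to "global" is one option, you could target the "mon" section instead:

# check the current value (defaults to 250 on recent releases)
ceph config get mon mon_max_pg_per_osd

# raise the limit cluster-wide via the monitors' config store
ceph config set global mon_max_pg_per_osd 1000

# confirm the new value took effect
ceph config get mon mon_max_pg_per_osd

This option should be picked up at runtime, so no daemon restart ought to be needed.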

> On Feb 24, 2024, at 10:30 AM, Anthony D'Atri <anthony.datri@xxxxxxxxx> wrote:
> 
> Add a 10 TB HDD to the third node as I suggested; that will help your cluster.
> 
> 
>> On Feb 24, 2024, at 10:29 AM, nguyenvandiep@xxxxxxxxxxxxxx wrote:
>> 
>> Let me correct a few small things:
>> 
>> We have 6 nodes: 3 OSD nodes and 3 gateway nodes (which run the RGW, MDS, and NFS services).
>> You are correct: 2 of the 3 OSD nodes have one new 10 TiB disk each.
>> 
>> About your suggestion to add another OSD host: we will. But we need to end this nightmare first; my NFS folder, which holds 10 TiB of data, is down :(
>> 
>> My ratios:
>> ceph osd dump | grep ratio
>> full_ratio 0.95
>> backfillfull_ratio 0.92
>> nearfull_ratio 0.85
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



