Re: [Urgent] Ceph system Down, Ceph FS volume in recovering

Let me correct a few small things:

We have 6 nodes: 3 OSD nodes and 3 gateway nodes (which run the RGW, MDS, and NFS services).
You are correct, 2 of the 3 OSD nodes each have one new 10 TiB disk.

About your suggestion to add another OSD host: we will. But first we need to end this nightmare; my NFS share, which holds 10 TiB of data, is down :(

My ratios:
ceph osd dump | grep ratio
full_ratio 0.95
backfillfull_ratio 0.92
nearfull_ratio 0.85
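
If temporarily raising these ratios would help recovery proceed while the new disks backfill, I assume it would look something like this (example values only, I have not run these yet):

ceph osd set-nearfull-ratio 0.90
ceph osd set-backfillfull-ratio 0.95
ceph osd set-full-ratio 0.97

and then I would set them back to the defaults above once backfill finishes.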
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx
