Re: How to properly deal with NEAR FULL OSD

On 19/02/2016 at 17:17, Don Laursen wrote:

Thanks. To summarize

Your data (images + volumes) = 27.15% space used

Raw used = 81.71%

That is a big difference that I can't account for. Can anyone? And is your cluster actually full?


I believe this is the pool replication size being accounted for, and it is harmless: 3 x 27.15 = 81.45, which is awfully close to 81.71.
We have the same behavior on our Ceph cluster.
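The arithmetic can be checked directly. A minimal sketch (the 27.15% and 81.71% figures come from the summary above; a replication size of 3 is an assumption, the usual default for replicated pools):

```python
# Sanity-check sketch: with replicated pools, "raw used" should be roughly
# the logical data usage multiplied by the replication size.
# (size = 3 is an assumption; verify with 'ceph osd pool get <pool> size'.)
replication_size = 3
data_used_pct = 27.15      # logical usage reported for images + volumes
raw_used_pct = 81.71       # raw usage reported by the cluster

expected_raw_pct = replication_size * data_used_pct
print(f"expected raw used: {expected_raw_pct:.2f}%")   # 81.45%, close to 81.71%
```

The small remainder (81.71 vs. 81.45) is plausibly filesystem and journal overhead on the OSDs rather than anything to worry about.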


I had the same problem with my small cluster: raw used was about 85%, while actual data, with replication, was about 30%. My OSDs were also BTRFS, and BTRFS was causing its own problems. I fixed it by removing each OSD one at a time and re-adding it with the default XFS filesystem. Doing so brought the two percentages to about the same, and it's good now.


That's odd: AFAIK we had the same behaviour with XFS before migrating to BTRFS.

Best regards,

Lionel
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
