Understanding the total space in CephFS

Dear Ceph users,

I need some help understanding the total space reported for a CephFS. My cluster currently consists of 8 machines; the one with the smallest capacity has 8 TB of disk, and the total raw space is 153 TB. For the CephFS I set up a 3x replicated metadata pool and a 6+2 erasure-coded data pool with host failure domain. In this configuration every host holds exactly one chunk of each object, so I would expect about 48 TB of total storage space: roughly speaking, and neglecting the metadata, 48 TB of data needs 48 TB of data chunks plus 16 TB of coding chunks, i.e. 64 TB in total, which divided evenly across my 8 machines is 8 TB per host and exactly saturates the smallest one.
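
Just to make the arithmetic explicit, here is a rough Python sketch of how I computed the 48 TB figure (the individual host sizes below are made up for illustration; only the 8 TB minimum, the 153 TB raw total and the 6+2 profile match my actual cluster):

# Sketch of the usable-capacity estimate for a k+m EC pool with host
# failure domain, assuming exactly one chunk per host per object.
# The per-host sizes are hypothetical; only min=8 TB and sum=153 TB are real.
def ec_usable_capacity(host_capacities_tb, k, m):
    n_hosts = len(host_capacities_tb)
    # With exactly k+m hosts, every host stores one chunk of every object,
    # so the smallest host limits the raw space usable on every host.
    assert n_hosts == k + m, "sketch assumes one chunk per host per object"
    raw_usable = min(host_capacities_tb) * n_hosts   # 8 TB * 8 hosts = 64 TB
    # Only k of every k+m chunks carry data, the other m are coding overhead.
    return raw_usable * k / (k + m)                  # 64 TB * 6/8 = 48 TB

hosts_tb = [8, 16, 16, 16, 16, 27, 27, 27]           # hypothetical, sums to 153
print(ec_usable_capacity(hosts_tb, k=6, m=2))        # prints 48.0

Running this gives 48.0 TB, which is where my expectation comes from.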

Assuming the above is correct, I would expect a df -h on a machine mounting the CephFS to report 48 TB of total space. Instead it started at around 75 TB, and the total has been slowly decreasing while I transfer data to the CephFS; it is now at 62 TB.

I cannot make sense of this behavior, nor tell whether my assumptions about the total space are correct, so I'd appreciate some help with this.
Thanks,

Nicola
