Disk consumption for CephFS

Hello.

I'm using Ceph Nautilus for a large folder containing approximately 1.7 TB of files. I created the filesystem and started copying the files over via rsync.

However, I had to stop the process, because Ceph shows the new size of the folder as almost 6 TB. I double-checked the pool's replication size, and it is 2. I also double-checked the rsync options; I am not following symlinks while copying.
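
For reference, this is roughly how I verified those two points. The pool name cephfs_data and the paths below are just placeholders for my setup; the actual data pool name comes from "ceph fs ls":

    # list the filesystem and its data pool, then check the replication size
    ceph fs ls
    ceph osd pool get cephfs_data size

    # archive mode copies symlinks as symlinks and does not follow them
    rsync -a --progress /mnt/source/ /mnt/cephfs/hugefolder/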

How can such a large difference between the size of the original folder and the space consumed on CephFS be explained?
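
In case it helps, these are the commands I am looking at to compare the logical data size with the raw usage (the mount point below is a placeholder for where my CephFS is mounted):

    # logical vs. raw usage per pool (STORED vs. USED columns in Nautilus)
    ceph df detail

    # recursive logical size of the copied folder as CephFS accounts for it
    getfattr -n ceph.dir.rbytes /mnt/cephfs/hugefolder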
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


