Re: Disk consume for CephFS

I suggest trying rsync's --sparse option. qcow2 files, which tend to be large, are typically sparse: their apparent size is much bigger than the blocks actually allocated on disk. Without the sparse option, rsync writes the holes out as zeros, so the files expand to their full apparent size at the destination.


September 14, 2020 6:15 PM, fotofors@xxxxxxxxx wrote:

> Hello.
> 
> I'm using the Nautilus Ceph version for a huge folder with approximately 1.7 TB of files. I
> created the filesystem and started to copy files via rsync.
> 
> However, I had to stop the process, because Ceph shows me that the new size of the folder is
> almost 6 TB. I double-checked the replicated size, and it is 2. I also double-checked the rsync
> options, and I am not copying files by following symlinks.
> 
> What could explain the extreme difference between the size of the original folder and its size
> on CephFS?
> _______________________________________________
> ceph-users mailing list -- ceph-users@xxxxxxx
> To unsubscribe send an email to ceph-users-leave@xxxxxxx


