cephfs file size limit of 1.1TB?

Dear All,

I've been testing out cephfs and have bumped into what appears to be an upper
file size limit of ~1.1TB.

e.g:

[root@cephfs1 ~]# time rsync --progress -av /ssd/isilon_melis.tar
/ceph/isilon_melis.tar
sending incremental file list
isilon_melis.tar
1099341824000  54%  237.51MB/s    1:02:05
rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]:
Broken pipe (32)
rsync: write failed on "/ceph/isilon_melis.tar": File too large (27)
rsync error: error in file IO (code 11) at receiver.c(322) [receiver=3.0.9]
rsync: connection unexpectedly closed (28 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(605)
[sender=3.0.9]

Firstly, is this expected?

If not, then does anyone have any suggestions on where to start digging?
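For what it's worth, the write dies at just over 2^40 bytes, and I believe the
CephFS default for max_file_size is 1 TiB (1099511627776 bytes), so that
setting is my main suspect. Assuming it is the right knob, I was planning to
check and then raise it along these lines (with "cephfs" below standing in for
the actual filesystem name, and the new value given in bytes, here 4 TiB):

[root@cephfs1 ~]# ceph fs get cephfs | grep max_file_size
[root@cephfs1 ~]# ceph fs set cephfs max_file_size 4398046511104

Can anyone confirm that this is the right setting to be looking at, and that
it is safe to change on a live filesystem?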

I'm using erasure coding (4+1, 50 x 8TB drives across 5 servers), with an
NVMe hot pool of 4 drives (2x replication).
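
In case the pool layout matters, this is roughly how the pools were created
(typed from memory, so the profile and pool names below are approximate
rather than exact):

[root@cephfs1 ~]# ceph osd erasure-code-profile set ec41 k=4 m=1
[root@cephfs1 ~]# ceph osd pool create cephfs_data 1024 1024 erasure ec41
[root@cephfs1 ~]# ceph osd pool create cephfs_hot 128 128 replicated
[root@cephfs1 ~]# ceph osd pool set cephfs_hot size 2
[root@cephfs1 ~]# ceph osd tier add cephfs_data cephfs_hot
[root@cephfs1 ~]# ceph osd tier cache-mode cephfs_hot writeback
[root@cephfs1 ~]# ceph osd tier set-overlay cephfs_data cephfs_hot
[root@cephfs1 ~]# ceph osd pool create cephfs_metadata 128 128
[root@cephfs1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data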

I've tried both Kraken (release) and the latest Luminous dev build.

many thanks,

Jake
-- 

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


