Are there any repercussions to configuring this on an existing large fs?
On Wed, May 24, 2017 at 1:36 PM, John Spray <jspray@xxxxxxxxxx> wrote:
On Wed, May 24, 2017 at 7:19 PM, Jake Grimmett <jog@xxxxxxxxxxxxxxxxx> wrote:
> Dear All,
>
> I've been testing out cephfs, and bumped into what appears to be an upper
> file size limit of ~1.1TB
>
> e.g:
>
> [root@cephfs1 ~]# time rsync --progress -av /ssd/isilon_melis.tar
> /ceph/isilon_melis.tar
> sending incremental file list
> isilon_melis.tar
> 1099341824000 54% 237.51MB/s 1:02:05
> rsync: writefd_unbuffered failed to write 4 bytes to socket [sender]:
> Broken pipe (32)
> rsync: write failed on "/ceph/isilon_melis.tar": File too large (27)
> rsync error: error in file IO (code 11) at receiver.c(322) [receiver=3.0.9]
> rsync: connection unexpectedly closed (28 bytes received so far) [sender]
> rsync error: error in rsync protocol data stream (code 12) at io.c(605)
> [sender=3.0.9]
>
> Firstly, is this expected?
CephFS has a configurable maximum file size; it is 1 TB by default.
Change it with:
ceph fs set <fs name> max_file_size <size in bytes>
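
For example (a hypothetical invocation -- the filesystem name "cephfs"
and the 20 TiB figure below are illustrative, not taken from this
thread):

    # Raise the limit to 20 TiB (20 * 2^40 bytes) on a filesystem
    # named "cephfs" (name and size are assumptions):
    ceph fs set cephfs max_file_size 21990232555520

    # Confirm the new value in the filesystem's settings:
    ceph fs get cephfs | grep max_file_size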
John
>
> If not, then does anyone have any suggestions on where to start digging?
>
> I'm using erasure encoding (4+1, 50 x 8TB drives over 5 servers), with an
> nvme hot pool of 4 drives (2 x replication).
>
> I've tried both Kraken (release), and the latest Luminous Dev.
>
> many thanks,
>
> Jake
> --
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@xxxxxxxxxxxxxx
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com