Re: rsync kernel client cephfs mkstemp no space left on device

On Sun, Dec 11, 2016 at 4:38 PM, Mike Miller <millermike287@xxxxxxxxx> wrote:
> Hi,
>
> you have given up too early. rsync is not a nice workload for cephfs; in
> particular, most Linux kernel cephfs clients will end up caching all
> inodes/dentries. The result is that MDS servers crash due to memory
> limitations. And rsync basically scans all inodes/dentries, so it is the
> perfect application to gobble up all inode caps.

While historically there have been client bugs that prevented the MDS
from enforcing cache size limits, this is not expected behaviour --
manually calling drop_caches is most definitely a workaround and not
something that I would recommend unless you're stuck with a
known-buggy client version for some reason.
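
If the MDS really is running out of memory under a workload like
rsync, the first thing to look at is the cache limit itself rather
than the clients. A minimal sketch, assuming a Jewel-era cluster and
an MDS named mds.a (the daemon name and the target size here are just
placeholders for your deployment):

  # current inode-count limit (mds_cache_size, default 100000 on Jewel);
  # run on the MDS host, against its admin socket
  ceph daemon mds.a config get mds_cache_size

  # what the MDS cache is actually holding right now
  ceph daemon mds.a perf dump | grep -i inode

  # raise the limit at runtime; size it to the RAM on the MDS host
  ceph tell mds.a injectargs '--mds_cache_size 500000'

Note that mds_cache_size counts inodes rather than bytes, so leave
yourself some headroom on the MDS host when raising it.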

Just felt the need to point that out in case people started picking
this up as a best practice!

Cheers,
John

> We run a cron job script, flush_cache, every few (2-5) minutes:
>
> #!/bin/bash
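> # "2" frees reclaimable slab objects (dentries and inodes), not page cache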
> echo 2 > /proc/sys/vm/drop_caches
>
> on all machines that mount cephfs. There is no performance drop on the
> client machines, and happily, this resolves the MDS congestion.
>
> We also went the RBD route before this, but for large images we much
> prefer cephfs.
>
> Regards,
>
> Mike
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


