Hi!

Thanks for the explanation. The behaviour (overwriting) was puzzling and suggested serious filesystem corruption. Now that we have identified the scenario, we can try workarounds.

Regards,
J.

On 02.09.2015 11:50, Yan, Zheng wrote:
>> On Sep 2, 2015, at 17:11, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
>>
>> Whoops, forgot to add Zheng.
>>
>> On Wed, Sep 2, 2015 at 10:11 AM, Gregory Farnum <gfarnum@xxxxxxxxxx> wrote:
>>> On Wed, Sep 2, 2015 at 10:00 AM, Janusz Borkowski
>>> <janusz.borkowski@xxxxxxxxxxxxxx> wrote:
>>>> Hi!
>>>>
>>>> I mount cephfs using the kernel client (3.10.0-229.11.1.el7.x86_64).
>>>>
>>>> The effect is the same when doing "echo >>" from another machine and
>>>> from a machine keeping the file open.
>>>>
>>>> The file is opened with open(..,
>>>> O_WRONLY|O_LARGEFILE|O_APPEND|O_BINARY|O_CREAT)
>>>>
>>>> Shell ">>" is implemented as (from strace bash -c "echo '7789' >>
>>>> /mnt/ceph/test"):
>>>>
>>>> open("/mnt/ceph/test", O_WRONLY|O_CREAT|O_APPEND, 0666) = 3
>>>>
>>>> The test file was ~500 KB in size.
>>>>
>>>> Each subsequent "echo >>" writes to the start of the test file: the
>>>> first "echo" overwrites the original contents, and each following
>>>> "echo" overwrites the bytes written by the preceding one.
>>> Hmmm. The userspace (i.e. ceph-fuse) implementation of this is a
>>> little racy but ought to work. I'm not as familiar with the kernel
>>> code, but I'm not seeing any special behavior in the Ceph code.
>>> Zheng, would you expect this to work? It looks like some of the Linux
>>> filesystems have their own O_APPEND handling and some don't, but I
>>> can't find it in the VFS either.
>>> -Greg
> Yes, the kernel client does not handle the case where multiple clients
> do append writes to the same file. I will fix it soon.
>
> Regards
> Yan, Zheng
> _______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
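For reference, the expected ">>" semantics that the quoted report contrasts with can be sketched as a quick local test. This runs against a temp file on a local filesystem (an assumption, not a cephfs mount); on the affected kernel client each append would instead land at offset 0 and overwrite.

```shell
# Minimal sketch of the append pattern from the strace above.
# Assumption: a local temp file, not the /mnt/ceph/test path in the report.
f=$(mktemp /tmp/append_test.XXXXXX)

# Each ">>" redirection opens with O_WRONLY|O_CREAT|O_APPEND, so every
# write should land at the current end of file, never at offset 0.
echo '7789' >> "$f"
echo '7789' >> "$f"
echo '7789' >> "$f"

# With correct O_APPEND handling the three writes accumulate:
wc -l < "$f"    # prints 3
cat "$f"

rm -f "$f"
```

On the broken kernel client the file would instead contain a single "7789" line, since each open ignored the append offset held by other writers.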