Re: CephFS file contains garbage zero padding after an unclean cluster shutdown

On Mon, Nov 26, 2018 at 4:30 AM Hector Martin <hector@xxxxxxxxxxxxxx> wrote:
>
> On 26/11/2018 00.19, Paul Emmerich wrote:
> > No, wait. Which system had the kernel panic? Your CephFS client
> > running rsync? In that case this would be expected behavior, because
> > rsync doesn't sync after every block and you lost your file system
> > cache.
>
> It was all on the same system. So is it expected behavior for size
> metadata to be updated non-atomically with respect to the file contents
> being written when using the CephFS kernel client? I.e., after appending
> data to the file, the metadata in CephFS is updated to reflect the new
> size, but the data remains in the page cache until those pages are
> flushed?
>

Yes, it's expected behavior. We haven't implemented ordered writes
(the equivalent of ext4's data=ordered mount option), so the size
metadata can be committed before the corresponding data reaches the
OSDs.
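
Until that changes, an application that needs the size and the data to
agree across a crash has to flush explicitly. Here is a minimal POSIX
sketch of that workaround (untested; the mount point and file name are
just placeholders):

/* Append to a file on CephFS and force the data out of the client's
 * page cache before relying on the new size. Without the fsync(), a
 * crash can leave the size metadata updated while the tail of the
 * file reads back as zeros. */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/mnt/cephfs/example.log";  /* placeholder path */
    static const char buf[] = "appended record\n";
    ssize_t len = sizeof(buf) - 1;

    int fd = open(path, O_WRONLY | O_APPEND | O_CREAT, 0644);
    if (fd < 0) {
        perror("open");
        return EXIT_FAILURE;
    }

    /* write() only dirties pages in the page cache; the new, larger
     * file size may already be visible at this point. */
    if (write(fd, buf, len) != len) {
        perror("write");
        close(fd);
        return EXIT_FAILURE;
    }

    /* fsync() pushes the dirty pages out to the OSDs, closing the
     * window in which a crash yields a zero-padded tail. */
    if (fsync(fd) != 0) {
        perror("fsync");
        close(fd);
        return EXIT_FAILURE;
    }

    close(fd);
    return EXIT_SUCCESS;
}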
> --
> Hector Martin (hector@xxxxxxxxxxxxxx)
> Public Key: https://mrcn.st/pub
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


