Re: CephFS file contains garbage zero padding after an unclean cluster shutdown

No, wait. Which system had the kernel panic? Your CephFS client running rsync?
In that case this is expected behavior: rsync doesn't sync after every
block, so whatever was still sitting in the client's file system cache was lost.
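
As a minimal illustration of that point (a sketch only, not rsync's actual
code; the mount path is made up), the difference is whether fsync() has run:
write() only hands data to the client's page cache, and nothing is
guaranteed to be on the OSDs until fsync()/fdatasync() returns.

/* Sketch only: why buffered client writes can vanish in a kernel panic.
 * The path below is a hypothetical CephFS mount point. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/cephfs/bigfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0) { perror("open"); return 1; }

    char buf[128 * 1024];
    memset(buf, 0xAB, sizeof(buf));

    /* write() returns once the data is in the page cache; a client
     * kernel panic here can lose it even though the file size may
     * already have been extended. */
    if (write(fd, buf, sizeof(buf)) != (ssize_t)sizeof(buf)) {
        perror("write");
        return 1;
    }

    /* Only after fsync() (or fdatasync()) returns is the data flushed
     * to the OSDs and safe against a client crash. rsync does not do
     * this per block, which is why the tail of the file was lost. */
    if (fsync(fd) < 0) { perror("fsync"); return 1; }

    close(fd);
    return 0;
}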

-- 
Paul Emmerich

Looking for help with your Ceph cluster? Contact us at https://croit.io

croit GmbH
Freseniusstr. 31h
81247 München
www.croit.io
Tel: +49 89 1896585 90

On Sun, 25 Nov 2018 at 16:16, Paul Emmerich
<paul.emmerich@xxxxxxxx> wrote:
>
> Maybe rsync called fallocate() on the file?
>
> Paul
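
For reference, a minimal sketch of the fallocate() hypothesis above (the path
is hypothetical, and rsync would normally only preallocate if asked to, e.g.
via its --preallocate option): preallocated blocks read back as zeroes until
real data is written over them, so a copy that dies part-way through leaves a
zero-filled tail at the full target size.

/* Sketch of the fallocate() hypothesis, not what rsync necessarily did. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical destination on the CephFS mount. */
    int fd = open("/mnt/cephfs/bigfile", O_WRONLY | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* Reserve the full target size (~1 TiB here) up front. The whole
     * range reads back as zeroes until it is overwritten, so an
     * interrupted copy keeps the full size with a zero-filled tail. */
    if (fallocate(fd, 0, 0, 1024LL * 1024 * 1024 * 1024) < 0)
        perror("fallocate");

    close(fd);
    return 0;
}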
>
> --
> Paul Emmerich
>
> Looking for help with your Ceph cluster? Contact us at https://croit.io
>
> croit GmbH
> Freseniusstr. 31h
> 81247 München
> www.croit.io
> Tel: +49 89 1896585 90
>
> On Fri, 23 Nov 2018 at 16:55, Hector Martin
> <hector@xxxxxxxxxxxxxx> wrote:
> >
> > Background: I'm running single-node Ceph with CephFS as an experimental
> > replacement for "traditional" filesystems. In this case I have 11 OSDs,
> > 1 mon, and 1 MDS.
> >
> > I just had an unclean shutdown (kernel panic) while a large (>1TB) file
> > was being copied to CephFS (via rsync). Upon bringing the system back
> > up, I noticed that the (incomplete) file has about 320MB worth of zeroes
> > at the end.
> >
> > This is the kind of behavior I would expect of traditional local
> > filesystems, where file metadata is updated to reflect the new size of
> > a growing file before the disk extents are allocated and filled with
> > data, so an unclean shutdown leaves files with tails of zeroes. I'm
> > surprised to see it with Ceph, though: I expected the OSD side of
> > things to be atomic, with all the BlueStore goodness, checksums, etc.,
> > and I figured CephFS would build on those primitives in a way that
> > makes this kind of inconsistency impossible.
> >
> > Is this expected behavior? It's not a huge dealbreaker, but I'd like to
> > understand how this kind of situation happens in CephFS (and how it
> > could affect a proper cluster, if at all: can this happen if, say, a
> > client, an MDS, or an OSD dies uncleanly, or only if several things
> > go down at once?)
> >
> > --
> > Hector Martin (hector@xxxxxxxxxxxxxx)
> > Public Key: https://mrcn.st/pub
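
One way to narrow this down (an editorial suggestion, not something proposed
in the thread): check whether the zero tail is a hole, i.e. the file size was
extended but the data behind it was never flushed or allocated, or whether
zeroes were genuinely written. A small sketch using lseek(SEEK_HOLE), which
requires a kernel and filesystem that report holes:

/* Diagnostic sketch: report the first hole in a file, if any. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    off_t size = lseek(fd, 0, SEEK_END);
    off_t hole = lseek(fd, 0, SEEK_HOLE);  /* first hole; equals size if none */

    if (hole < size)
        printf("size %lld, first hole at %lld: the last %lld bytes were never materialized\n",
               (long long)size, (long long)hole, (long long)(size - hole));
    else
        printf("size %lld, no holes: the trailing zeroes were actually written\n",
               (long long)size);

    close(fd);
    return 0;
}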
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



