Re: [Lsf-pc] [LSF/MM TOPIC] end-to-end data and metadata corruption detection

On Tue, Jan 31, 2012 at 11:28:26AM -0800, Gregory Farnum wrote:
> On Tue, Jan 31, 2012 at 11:22 AM, Bernd Schubert
> <bernd.schubert@xxxxxxxxxxxxxxxxxx> wrote:
> > I guess we should talk to developers of other parallel file systems and see
> > what they think about it. I think cephfs already uses the data-integrity
> > checking provided by btrfs, although I'm not entirely sure and need to check the
> > code. As I said before, Lustre does network checksums already and *might* be
> > interested.
> 
> Actually, right now Ceph doesn't check btrfs' data integrity
> information, but since Ceph doesn't have any data-at-rest integrity
> verification it relies on btrfs if you want that. Integrating
> integrity verification throughout the system is on our long-term to-do
> list.
> We too will be sad if using a kernel-level integrity system requires
> using DIO, although we could probably work out a way to do
> "translation" between our own integrity checksums and the
> btrfs-generated ones if we have to (thanks to replication).

DIO isn't really required, but doing this without synchronous writes
will get painful in a hurry.  There's nothing wrong with letting the
data sit in the page cache after the IO is done though.
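The pattern Chris describes might be sketched roughly as follows (editor's illustration, not code from the thread): checksum the buffer in the application, write it through the page cache with O_SYNC so the data is on stable storage when write() returns, and let the pages stay cached for later reads. zlib's crc32() stands in here for whatever checksum the application actually uses.

	/* Editor's illustration: an application-level checksum paired with a
	 * synchronous buffered write -- O_SYNC rather than O_DIRECT.
	 * Assumes zlib for crc32(); error handling is minimal. */
	#include <fcntl.h>
	#include <stdint.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <zlib.h>

	int checked_write(const char *path, const unsigned char *buf, size_t len)
	{
		/* Checksum the buffer before handing it to the kernel. */
		uint32_t csum = crc32(0L, buf, (uInt)len);

		int fd = open(path, O_WRONLY | O_CREAT | O_SYNC, 0644);
		if (fd < 0)
			return -1;

		/* With O_SYNC, write() returns only after the data reaches
		 * stable storage, yet the pages remain in the page cache
		 * and can be read back without another disk access. */
		ssize_t ret = write(fd, buf, len);
		close(fd);
		if (ret != (ssize_t)len)
			return -1;

		printf("wrote %zu bytes, crc32 0x%08x\n", len, csum);
		return 0;
	}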

-chris
