Re: Could it be made possible to offer "supplementary" data to a DIO write ?

Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:

> > Say, for example, I need to write a 3-byte change from a page, where that
> > page is part of a 256K sequence in the pagecache.  Currently, I have to
> > round the 3-bytes out to DIO size/alignment, but I could say to the API,
> > for example, "here's a 256K iterator - I need bytes 225-227 written, but
> > you can write more if you want to"?
> 
> I think you're optimising the wrong thing.  No actual storage lets you
> write three bytes.  You're just pushing the read/modify/write cycle to
> the remote end.  So you shouldn't even be tracking that three bytes have
> been dirtied; you should be working in multiples of i_blocksize().

I'm dealing with network filesystems that don't necessarily let you know what
i_blocksize is.  Assume it to be 1.

Further, only sending, say, 3 bytes and pushing RMW to the remote end is not
necessarily wrong for a network filesystem, for at least two reasons: it
reduces the network load and it reduces the effects of third-party write
collisions.
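
To make the shape of that concrete, here's a rough sketch of what the caller
might hand over; every name in it is invented for illustration:

struct dio_supplementary_write {
	struct iov_iter	*whole;		/* e.g. the full 256K run */
	loff_t		must_start;	/* first byte that must be written */
	size_t		must_len;	/* length of the mandatory range */
};

The filesystem would be required to write [must_start, must_start + must_len)
and permitted to write any superset of that which the iterator covers.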

> I don't know of any storage which lets you ask "can I optimise this
> further for you by using a larger size".  Maybe we have some (software)
> compressed storage which could do a better job if given a whole 256kB
> block to recompress.

It would offer an extent-based filesystem the possibility of adjusting its
extent list.  And if you were mad enough to put your cache on a shingled
drive...  (though you'd probably need a much bigger block than 256K to make
that useful).  Also, jffs2 (if someone used that as a cache) can compress its
blocks.
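
A backend like that, with a preferred I/O unit - a compression block, an
erase block, an SMR zone - could simply round the mandatory range out to that
unit.  A rough sketch, assuming the unit is a power of two as the kernel's
round_up()/round_down() require:

static void expand_to_unit(loff_t *start, size_t *len, unsigned int unit)
{
	loff_t end = round_up(*start + *len, unit);

	*start = round_down(*start, unit);
	*len = end - *start;
}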

> So it feels like you're both tracking dirty data at too fine a granularity,
> and getting ahead of actual hardware capabilities by trying to introduce a
> too-flexible API.

We might not know what the hardware capabilities are, and there may be
multiple destination servers with different capabilities involved.  Note that
NFS and AFS in the kernel both currently track dirty data at byte granularity
and only send the bytes that changed.  The expense of setting up the write op
on the server might actually outweigh that of the RMW cycle.  And with
something like ceph, the server might have whole-object RMW/COW granularity,
say 4M.
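
For illustration, the byte tracking can be as simple as one span per page -
roughly the shape of the dirty region afs keeps in its page private data,
though the names below are invented:

struct dirty_span {
	unsigned int from;	/* first dirty byte in the page */
	unsigned int to;	/* one past the last dirty byte */
};

static void dirty_span_add(struct dirty_span *d,
			   unsigned int from, unsigned int to)
{
	if (d->from >= d->to) {
		/* Page was clean: start a new span. */
		d->from = from;
		d->to = to;
	} else {
		/* Merge; note this also dirties any gap between the spans. */
		d->from = min(d->from, from);
		d->to = max(d->to, to);
	}
}

Only the bytes in [from, to) then need to go over the wire.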

Yet further, if your network fs has byte-range locks/leases and you have a
write lock/lease that ends part way into a page, when you drop that lock/lease
you shouldn't flush any data outside of that range lest you overwrite a range
that someone else has a lock/lease on.
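
That is, the range to be flushed wants clamping to the lease before it goes
anywhere.  Another invented-names sketch:

static bool clamp_to_lease(loff_t *start, loff_t *end,
			   loff_t lease_start, loff_t lease_end)
{
	*start = max(*start, lease_start);
	*end = min(*end, lease_end);
	return *end > *start;	/* anything left to flush? */
}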

David



