Re: [PATCH v6 00/10] block atomic writes

On 4/10/24 06:05, Matthew Wilcox wrote:
> On Mon, Apr 08, 2024 at 10:50:47AM -0700, Luis Chamberlain wrote:
>>> On Fri, Apr 05, 2024 at 11:06:00AM +0100, John Garry wrote:
>>>> On 04/04/2024 17:48, Matthew Wilcox wrote:
>>>>> The thing is that there's no requirement for an interface as complex as
>>>>> the one you're proposing here.  I've talked to a few database people
>>>>> and all they want is to increase the untorn write boundary from "one
>>>>> disc block" to one database block, typically 8kB or 16kB.
>>>>>
>>>>> So they would be quite happy with a much simpler interface where they
>>>>> set the inode block size at inode creation time,
>>>> We want to support untorn writes for bdev file operations - how can we set
>>>> the inode block size there? Currently it is based on logical block size.
>>> ioctl(BLKBSZSET), I guess?  That currently limits to PAGE_SIZE, but I
>>> think we can remove that limitation with the bs>PS patches.
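For reference, setting the bdev soft block size from userspace looks
something like the sketch below. The device path and the 16kB size are
illustrative only, and today's kernels return EINVAL for anything above
PAGE_SIZE:

/*
 * Minimal sketch: bump the bdev soft block size with BLKBSZSET.
 * Illustrative device path and size; requires CAP_SYS_ADMIN.
 */
#include <fcntl.h>
#include <linux/fs.h>		/* BLKBSZSET */
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	int size = 16384;	/* e.g. one 16kB database block */
	int fd = open("/dev/sda", O_RDONLY);

	if (fd < 0)
		return 1;
	if (ioctl(fd, BLKBSZSET, &size) < 0)
		perror("BLKBSZSET");	/* EINVAL once size > PAGE_SIZE */
	close(fd);
	return 0;
}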

>> I can say a bit more on this, as I explored it. Essentially Matthew,
>> yes, I got that to work, but it requires a set of different patches.
>> We have what we tried, and based on feedback from Chinner we have a
>> direction on what to try next. The last effort on that front was
>> having the bdev use the iomap aops and lifting the PAGE_SIZE limit up
>> to the page cache limits. The crux was that this ends up requiring
>> BUFFER_HEAD to be disabled, which is pretty limiting, so my old
>> implementation had dynamic aops to let us use the buffer-head aops
>> only for filesystems which require them and the iomap aops otherwise.
>> But as Chinner noted, we learned through the DAX experience that this
>> is not a route we want to try again, so the real solution is to extend
>> the iomap bdev aops code with buffer-head compatibility.

> Have you tried just using the buffer_head code?  I think you heard bad
> advice at last LSFMM.  Since then I've landed a bunch of patches which
> remove PAGE_SIZE assumptions throughout the buffer_head code, and while
> I haven't tried it, it might work.  And it might be easier to make work
> than adding more BH hacks to the iomap code.
>
> A quick audit for problems ...
>
> __getblk_slow:
>         if (unlikely(size & (bdev_logical_block_size(bdev)-1) ||
>                          (size < 512 || size > PAGE_SIZE))) {
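>                 /*
>                  * i.e. buffer sizes that are not a multiple of the
>                  * logical block size, or that fall outside
>                  * 512..PAGE_SIZE, are rejected -- this PAGE_SIZE cap
>                  * is the problem being audited here.
>                  */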

> cont_expand_zero (not used by bdev code)
> cont_write_begin (ditto)
>
> That's all I spot from a quick grep for PAGE, offset_in_page() and kmap.
>
> You can't do a lot of buffer_heads per folio, because you'll overrun
>          struct buffer_head *bh, *head, *arr[MAX_BUF_PER_PAGE];
> in block_read_full_folio(), but you can certainly do _one_ buffer_head
> per folio, and that's all you need for bs>PS.

Indeed; I've got a patch here to just restart the submission loop when
one reaches the end of the array. But maybe submitting one bh at a time
and using plugging would achieve the same thing. Let's see.
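Roughly what I have in mind (a sketch only, as it would sit inside
fs/buffer.c; the helper name is made up, and the uptodate/hole handling
that block_read_full_folio() does is omitted): mark every buffer async
first so the completion handler can't unlock the folio early, then
submit them one at a time under a plug so the block layer can still
merge the bios:

#include <linux/blkdev.h>
#include <linux/buffer_head.h>

/*
 * Sketch: read every buffer of a folio without the
 * arr[MAX_BUF_PER_PAGE] staging array.
 */
static void read_folio_bhs_plugged(struct buffer_head *head)
{
	struct buffer_head *bh = head;
	struct blk_plug plug;

	/*
	 * Pass 1: lock and mark every buffer async before any I/O is
	 * issued, so end_buffer_async_read() cannot see a half-prepared
	 * folio and unlock it early.
	 */
	do {
		lock_buffer(bh);
		mark_buffer_async_read(bh);
		bh = bh->b_this_page;
	} while (bh != head);

	/*
	 * Pass 2: submit one bh at a time; the plug lets the block
	 * layer merge the per-bh bios into larger requests.
	 */
	blk_start_plug(&plug);
	do {
		submit_bh(REQ_OP_READ, bh);
		bh = bh->b_this_page;
	} while (bh != head);
	blk_finish_plug(&plug);
}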

>> I suspect this is a use case where perhaps the max folio order could
>> be set for the bdev in the future: the logical block size as the min
>> order, and the large atomic size as the max order.

> No, that's not what we want to do at all!  Minimum writeback size needs
> to be the atomic size, otherwise we have to keep track of which writes
> are atomic and which ones aren't.  So, just set the logical block size
> to the atomic size, and we're done.

+1. My thoughts all along.
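For completeness, the userspace side this converges on would then be
just the sketch below, using the RWF_ATOMIC flag this series proposes.
The device path, the 16kB unit, and the O_DIRECT alignment are all
assumptions (and don't run it against a disk you care about):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <sys/uio.h>

#ifndef RWF_ATOMIC
#define RWF_ATOMIC 0x00000040	/* from the proposed uapi headers */
#endif

int main(void)
{
	struct iovec iov = { .iov_len = 16384 };	/* one database block */
	int fd = open("/dev/sda", O_WRONLY | O_DIRECT);	/* illustrative */

	if (fd < 0 || posix_memalign(&iov.iov_base, 16384, 16384))
		return 1;
	/* Either all 16kB reaches the media or none of it does. */
	if (pwritev2(fd, &iov, 1, 0, RWF_ATOMIC) < 0)
		return 1;
	return 0;
}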

Cheers,

Hannes
--
Dr. Hannes Reinecke                  Kernel Storage Architect
hare@xxxxxxx                                +49 911 74053 688
SUSE Software Solutions GmbH, Frankenstr. 146, 90461 Nürnberg
HRB 36809 (AG Nürnberg), GF: I. Totev, A. McDonald, W. Knoblich




