Re: [PATCH RFC] statx.2: Add stx_atomic_write_unit_max_opt

On 20/03/2025 14:12, Christoph Hellwig wrote:
>>> On Thu, Mar 20, 2025 at 09:19:40AM +0000, John Garry wrote:
>>>> But is there value in reporting this limit? I am not sure. I am not
>>>> sure what the user would do with this info.
>>>
>>> Align their data structures to it, e.g. size the log buffers to it.


>> Sure, there may be a usecase there.
>>
>> So far I am just considering the DB usecase, and they know the atomic
>> write size which they want to use, i.e. their internal page size, and
>> align to that. If that internal page size is <= this opt limit, then good.
>>
>> Maybe, for example, they want to write 1K consecutive 16K pages, each
>> atomically, and decide to do a big 16M atomic write but find that it is
>> slow as the bdev atomic limit is < 16M.
>>
>> Maybe I should just update the documentation to mention that for XFS they
>> should check the mounted bdev atomic limits.

> For something working on files, having to figure out the underlying
> block device (which is non-trivial given the various methods of
> multi-device support) and then looking into block sysfs is a no-go.
>
> So if we have any sort of use case for it, we should expose the limit.


Coming back to what was discussed about not adding a new flag to fetch this limit:

> Does that actually work?  Can userspace assume all unknown statx
> fields are padded to zero?

In cp_statx(), the kernel pre-zeroes the statx structure before copying it out to userspace, so fields which a given kernel does not know about are reliably reported as zero. As such, the rule "if stx_atomic_write_unit_max_opt is zero, just use the hard unit max limit" seems to hold.

> If so my dio read align change could have
> done away with the extra flag.

Sounds like it. Although maybe this practice is not preferred, i.e. changing what an existing request/result mask bit returns rather than adding a new flag.



