Re: [GIT PULL] Block updates for 6.9-rc1

On Tue, Mar 12, 2024 at 11:22:53AM -0400, Mike Snitzer wrote:
> blk_validate_limits() is currently very pedantic. I discussed with Jens
> briefly and we're thinking it might make sense for blk_validate_limits()
> to be more forgiving by _not_ imposing hard -EINVAL failure.  That in
> the interim, during this transition to more curated and atomic limits, a
> WARN_ON_ONCE() splat should serve as enough notice to developers (be it
> lower level nvme or higher-level virtual devices like DM).

I guess.  And it more closely matches the status quo.  That being said,
I want to move to hard rejection eventually to catch all the issues.
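
Something like the below is roughly how I'd read the transitional
behaviour (completely untested sketch; the function name, the check and
the clamp value are just illustrative, this is not the actual
blk_validate_limits() body):

	#include <linux/blkdev.h>

	/*
	 * Sketch only: instead of rejecting an unsupportable
	 * max_segment_size with -EINVAL, warn once and clamp it to a
	 * sane default so the limits update still goes through.
	 */
	static int blk_validate_limits_lenient(struct queue_limits *lim)
	{
		if (WARN_ON_ONCE(lim->max_segment_size &&
				 lim->max_segment_size < PAGE_SIZE))
			lim->max_segment_size = BLK_MAX_SEGMENT_SIZE;
		return 0;
	}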

> BUT for this specific max_segment_size case, the constraints of dm-crypt
> are actually more conservative due to crypto requirements.

Honestly, to me the dm-crypt requirement actually doesn't make much
sense: max_segment_size is for hardware drivers that have requirements
for SGLs or equivalent hardware interfaces.  If dm-crypt never wants to
see more than a single page per bio_vec, it should just always iterate
them using bio_for_each_segment.
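
E.g. something like this (untested sketch; encrypt_one_page() is a
made-up placeholder for whatever the per-page crypto work is):

	#include <linux/bio.h>

	static void crypt_walk_pages(struct bio *bio)
	{
		struct bvec_iter iter;
		struct bio_vec bv;

		/*
		 * bio_for_each_segment() never returns more than a
		 * single page per step, even if the underlying bio_vec
		 * covers multiple pages, so no max_segment_size limit
		 * is needed to get single-page granularity.
		 */
		bio_for_each_segment(bv, bio, iter) {
			/* placeholder for the per-page crypto work */
			encrypt_one_page(bv.bv_page, bv.bv_offset,
					 bv.bv_len);
		}
	}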

> Yet nvme's
> more general "don't care, but will care if non-nvme driver does" for
> this particular max_segment_size limit is being imposed when validating
> the combined limits that dm-crypt will impose at the top-level.

The real problem is that we combine limits that we shouldn't combine.
Ever since we've supported immutable biovecs and do the splitting
down in blk-mq, there has been no point in even inheriting such limits
in the upper drivers.
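
To illustrate the direction (hypothetical helper, not existing code):
a bio-based stacking driver could stack only the limits it has to
honour itself and leave the per-segment hardware limits alone, as each
bottom request queue splits bios against its own limits in blk-mq
anyway:

	#include <linux/blkdev.h>

	/*
	 * Hypothetical sketch: stack only what the upper, bio-based
	 * device itself cares about.  Per-segment hardware limits
	 * (max_segment_size, max_segments, seg_boundary_mask,
	 * virt_boundary_mask) are deliberately not inherited; the
	 * bottom queues enforce them when splitting in blk-mq.
	 */
	static void stack_soft_limits(struct queue_limits *t,
				      const struct queue_limits *b)
	{
		t->max_hw_sectors = min_not_zero(t->max_hw_sectors,
						 b->max_hw_sectors);
		t->logical_block_size = max(t->logical_block_size,
					    b->logical_block_size);
		t->physical_block_size = max(t->physical_block_size,
					     b->physical_block_size);
	}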




