Re: [bug?] blk_queue_may_bounce() has the comparison max_low_pfn and max_pfn wrong way

On Fri, Oct 28, 2022 at 08:21:55PM +0100, Al Viro wrote:
> On Fri, Oct 28, 2022 at 12:51:00PM -0600, Jens Axboe wrote:
> > On 10/28/22 12:02 PM, Al Viro wrote:
> > > 	We have this:
> > > 
> > > static inline bool blk_queue_may_bounce(struct request_queue *q)
> > > {
> > >         return IS_ENABLED(CONFIG_BOUNCE) &&
> > >                 q->limits.bounce == BLK_BOUNCE_HIGH &&
> > >                 max_low_pfn >= max_pfn;
> > > }
> > > 
> > > static inline struct bio *blk_queue_bounce(struct bio *bio,
> > >                 struct request_queue *q)
> > > {
> > >         if (unlikely(blk_queue_may_bounce(q) && bio_has_data(bio)))
> > >                 return __blk_queue_bounce(bio, q);
> > >         return bio;
> > > }
> > > 
> > > Now, the last term of the expression in blk_queue_may_bounce() is
> > > true only on the boxen where max_pfn is no greater than max_low_pfn.
> > > 
> > > Unless I'm very confused, that's the boxen where we don't *have*
> > > any highmem pages.
> > > 
> > > What's more, consider this:
> > > 
> > > static __init int init_emergency_pool(void)
> > > {
> > >         int ret;
> > > 
> > > #ifndef CONFIG_MEMORY_HOTPLUG
> > >         if (max_pfn <= max_low_pfn)
> > >                 return 0;
> > > #endif
> > > 
> > >         ret = mempool_init_page_pool(&page_pool, POOL_SIZE, 0);
> > >         BUG_ON(ret);
> > >         pr_info("pool size: %d pages\n", POOL_SIZE);
> > > 
> > >         init_bounce_bioset();
> > >         return 0;
> > > }
> > > 
> > > On the same boxen (assuming we don't have memory hotplug) page_pool won't
> > > be set up at all, so we wouldn't be able to bounce any highmem page if we
> > > ran into one.
> > > 
> > > AFAICS, this condition is backwards - it should be
> > > 
> > > static inline bool blk_queue_may_bounce(struct request_queue *q)
> > > {
> > >         return IS_ENABLED(CONFIG_BOUNCE) &&
> > >                 q->limits.bounce == BLK_BOUNCE_HIGH &&
> > >                 max_low_pfn < max_pfn;
> > > }
> > > 
> > > What am I missing here?
> > 
> > I don't think you're missing anything; the case we need it for is when
> > max_pfn > max_low_pfn. I wonder when this got botched? Because some
> > of those statements date back probably about 20 years, to when we started
> > allowing highmem pages to do IO.
> 
> 9bb33f24abbd "block: refactor the bounce buffering code" about a year ago.
> 
> But fixing the test alone is not going to be enough - just a quick look
> through __blk_queue_bounce() catches an obvious bug on write (we copy the
> part of the highmem page covered by the bio into the beginning of the
> bounce page - and leave ->bv_offset as-is) as well as a possible bug in
> ->bi_status handling (if we really can run into bio_split() there).
> 
> I wonder if we ought to simply add "depends on BROKEN" for CONFIG_BOUNCE... ;-/
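
To make the write-side bug concrete, here is a rough sketch of the copy loop
in __blk_queue_bounce() as of that refactor (paraphrased from memory, not a
verbatim quote):

for (i = 0, to = bio->bi_io_vec; i < bio->bi_vcnt; to++, i++) {
        struct page *bounce_page;

        if (!PageHighMem(to->bv_page))
                continue;

        bounce_page = mempool_alloc(&page_pool, GFP_NOIO);

        if (rw == WRITE) {
                flush_dcache_page(to->bv_page);
                /* the payload lands at the *start* of the bounce page... */
                memcpy_from_bvec(page_address(bounce_page), to);
        }
        /*
         * ...but bv_offset is left as-is, so the device will later read
         * to->bv_len bytes starting at the old offset within the bounce
         * page - i.e. the wrong bytes.
         */
        to->bv_page = bounce_page;
}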

BTW, ->bi_status propagation looks brittle in general.  Are there any
places other than bio_init() where we would want to store 0 in ->bi_status
of anything?

Look at e.g. null_blk; AFAICS, we store to ->bi_status on any request
completion (in the NULL_Q_BIO case, at least).  What happens if a request
gets split and the split-off part finishes first with an error?  AFAICS,
its ->bi_status will be copied to the parent (the original bio, the one
that covers the tail).  Now the IO on the original bio is over as well
and we hit drivers/block/null_blk/main.c:end_cmd().  Suppose this
part succeeds; won't we end up overwriting ->bi_status with zero
and assuming that the entire thing succeeded, despite the
(now lost) error on the split-off part?
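
For comparison, the chained-completion path in block/bio.c is careful not to
clobber an error already recorded in the parent; roughly (a from-memory sketch
of __bio_chain_endio(), not a verbatim quote):

static struct bio *__bio_chain_endio(struct bio *bio)
{
        struct bio *parent = bio->bi_private;

        /* propagate an error only if the parent doesn't have one yet */
        if (bio->bi_status && !parent->bi_status)
                parent->bi_status = bio->bi_status;
        bio_put(bio);
        return parent;
}

An unconditional ->bi_status store in a driver's completion path has no such
guard, which is what would let the error from the split-off part get lost.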


