On 18 May 2018 at 19:18, Christoph Hellwig <hch@xxxxxx> wrote:
> If a driver uses the dma API (as indicated by a device with a dma mask)
> we can rely on the dma mapping API to do any required bounce buffering,
> and all drivers using bounce buffering or pio now either use the proper
> highmem-aware accessors or depend on !HIGHMEM.

Considering that we have a few other mmc host drivers to convert to the
highmem accessors, I need to postpone this one until all have been
fixed, right? Well, unless the rest use the !HIGHMEM depends option!?

Kind regards
Uffe

>
> Signed-off-by: Christoph Hellwig <hch@xxxxxx>
> ---
>  drivers/mmc/core/queue.c | 5 -----
>  1 file changed, 5 deletions(-)
>
> diff --git a/drivers/mmc/core/queue.c b/drivers/mmc/core/queue.c
> index 56e9a803db21..a18541930c01 100644
> --- a/drivers/mmc/core/queue.c
> +++ b/drivers/mmc/core/queue.c
> @@ -351,17 +351,12 @@ static const struct blk_mq_ops mmc_mq_ops = {
>  static void mmc_setup_queue(struct mmc_queue *mq, struct mmc_card *card)
>  {
>  	struct mmc_host *host = card->host;
> -	u64 limit = BLK_BOUNCE_HIGH;
> -
> -	if (mmc_dev(host)->dma_mask && *mmc_dev(host)->dma_mask)
> -		limit = (u64)dma_max_pfn(mmc_dev(host)) << PAGE_SHIFT;
>
>  	blk_queue_flag_set(QUEUE_FLAG_NONROT, mq->queue);
>  	blk_queue_flag_clear(QUEUE_FLAG_ADD_RANDOM, mq->queue);
>  	if (mmc_can_erase(card))
>  		mmc_queue_setup_discard(mq->queue, card);
>
> -	blk_queue_bounce_limit(mq->queue, limit);
>  	blk_queue_max_hw_sectors(mq->queue,
>  		min(host->max_blk_count, host->max_req_size / 512));
>  	blk_queue_max_segments(mq->queue, host->max_segs);
> --
> 2.17.0
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
> the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html