Re: [PATCHv3 15/41] filemap: handle huge pages in do_generic_file_read()

On Mon, Nov 07, 2016 at 07:01:03AM -0800, Christoph Hellwig wrote:
> On Mon, Nov 07, 2016 at 02:13:05PM +0300, Kirill A. Shutemov wrote:
> > It looks like a huge limitation to me.
> 
> The DAX PMD fault code can live just fine with it.

There's no way around it for DAX, since we map the backing storage directly
into userspace. The page cache has no such limitation, and I don't see the
point of introducing one artificially.

Backing storage fragmentation can factor into the decision of whether we want
to allocate a huge page, but it shouldn't be a show-stopper.
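
To make that concrete, something along these lines is what I have in mind.
This is only a sketch, not code from this series; should_try_huge_page(),
backing_is_contiguous() and huge_read_still_profitable() are made-up names
standing in for whatever the filesystem and the allocation heuristic can
actually tell us:

static bool should_try_huge_page(struct address_space *mapping, pgoff_t index)
{
	/*
	 * DAX maps the backing storage straight into userspace, so a
	 * PMD-sized mapping really does need a contiguous extent behind it.
	 */
	if (IS_DAX(mapping->host))
		return backing_is_contiguous(mapping, index, HPAGE_PMD_NR);

	/*
	 * Page cache: a fragmented extent only means more scattered I/O to
	 * fill the huge page.  Weigh that cost against the expected benefit,
	 * but don't treat it as a hard failure.
	 */
	if (!backing_is_contiguous(mapping, index, HPAGE_PMD_NR))
		return huge_read_still_profitable(mapping, index);

	return true;
}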

> And without it performance would suck anyway.

That depends on the workload, obviously.

-- 
 Kirill A. Shutemov


