Re: [LSF/MM ATTEND] Un-addressable device memory and block/fs implications

On 01/16/2017 04:04 AM, Anshuman Khandual wrote:
On 12/16/2016 08:44 AM, Aneesh Kumar K.V wrote:
Jerome Glisse <jglisse@xxxxxxxxxx> writes:

I would like to discuss un-addressable device memory in the context of
filesystems and block devices. Specifically, how to handle write-back, read,
... when a filesystem page is migrated to device memory that the CPU cannot
access.

I intend to post a patchset leveraging the same idea as the existing
block bounce helper (block/bounce.c) to handle this. I believe this is
worth discussing at the summit, to see how people feel about such a plan
and whether they have better ideas.
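
For concreteness, here is a rough sketch of what such a bounce step might
look like in the write-back path. is_device_unaddressable_page() and
hmm_copy_from_device() are hypothetical placeholders for whatever the
patchset ends up providing; only alloc_page() is real kernel API here:

/*
 * Sketch only: bounce an un-addressable device page into system
 * memory before block I/O, in the spirit of block/bounce.c.
 */
static struct page *bounce_device_page(struct page *page)
{
        struct page *bounce;

        if (!is_device_unaddressable_page(page))        /* hypothetical */
                return page;

        bounce = alloc_page(GFP_NOIO);
        if (!bounce)
                return ERR_PTR(-ENOMEM);

        /*
         * The CPU cannot map the device page, so the copy must be
         * performed by the device itself, through a driver callback.
         */
        hmm_copy_from_device(bounce, page);             /* hypothetical */
        return bounce;
}

The block layer would then submit the bio against the bounce page and
free it on completion, just as block/bounce.c does for highmem today.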


I would also like to join discussions on:
  - Peer-to-Peer DMAs between PCIe devices

Yes! This is looming large, because we keep insisting on building new computers with a *lot* of GPUs in them, and then connecting them up with NICs as well, and oddly enough, people keep trying to do peer-to-peer transfers between GPUs, and from GPUs to NICs, etc. :) It feels like the earlier linux-rdma and linux-pci discussions stalled because no one was certain of the long-term direction of the design. So it is worth settling on that direction.



  - CDM (coherent device memory)
  - PMEM
  - overall mm discussions
I would like to attend this discussion. I can talk about coherent device
memory and how having HMM handle it would give us one interface for
device drivers. For the coherent device case we definitely need page
cache migration support; a sketch of what that might look like follows.
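
To make the page cache migration point concrete, here is what a
driver-driven migration might look like using today's migrate_pages()
(signature as of this era). new_device_page() and device_alloc_page()
are hypothetical, and whether the address_space ->migratepage() hooks
can cope with device memory is exactly the open question:

/* Sketch: migrate a list of page cache pages into device memory. */
static struct page *new_device_page(struct page *page,
                                    unsigned long private, int **result)
{
        /* hypothetical allocator returning coherent device memory */
        return device_alloc_page();
}

static int migrate_to_device(struct list_head *pagelist)
{
        /* MR_MEMORY_HOTPLUG reused here purely for illustration */
        return migrate_pages(pagelist, new_device_page, NULL, 0,
                             MIGRATE_SYNC, MR_MEMORY_HOTPLUG);
}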

I have been in the mailing list discussion about HMM since v13, which
was posted back in October. It touched on many points, including how HMM
changes ZONE_DEVICE to accommodate un-addressable device memory, and the
migration capability of the currently supported ZONE_DEVICE-based
persistent memory. I have also looked at HMM more closely from the
perspective of whether it can accommodate coherent device memory, which
others have already discussed on this thread. I too would like to attend
to discuss this topic further.
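
For reference, the kind of distinction the patches layer on top of
ZONE_DEVICE can be pictured like this; the names are illustrative and
the actual patchset may spell them differently:

/*
 * Sketch: tell un-addressable device pages apart from other
 * ZONE_DEVICE pages (e.g. persistent memory).
 */
static inline bool is_device_unaddressable_page(struct page *page)
{
        return is_zone_device_page(page) &&
               page->pgmap->type == MEMORY_DEVICE_UNADDRESSABLE; /* hypothetical */
}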

Also, on the huge page points (mentioned earlier in this short thread): some of our GPUs can, at times, match the CPU's large/huge page sizes. It is a delicate thing to achieve, but moving, say, 2 MB pages between the CPU and the GPU would be really fast for some workloads.
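
As an illustration, a device fault handler could opportunistically use
the large mapping whenever the CPU side is backed by a THP, migrating
2 MB at a time instead of 512 individual 4 KB pages. dev_map_huge() and
dev_map_page() below are stand-ins for driver-specific mapping hooks:

/* Sketch: map 2 MB at a time when the backing page is a THP. */
static int dev_fault(struct vm_area_struct *vma, unsigned long addr,
                     struct page *page)
{
        if (PageTransCompound(page) && IS_ALIGNED(addr, HPAGE_PMD_SIZE))
                return dev_map_huge(vma, addr & HPAGE_PMD_MASK, page); /* hypothetical */
        return dev_map_page(vma, addr, page);                          /* hypothetical */
}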

I should be able to present performance numbers for HMM on Pascal GPUs, so if anyone is interested, let me know in advance which particular workloads or configurations seem most interesting, and I'll gather those numbers.

I would also like to attend this one.

thanks
John Hubbard
NVIDIA


--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@xxxxxxxxx.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@xxxxxxxxx




