Re: [PATCH 09/10] mm/hmm: allow to mirror vma of a file on a DAX backed filesystem

On Tue, Jan 29, 2019 at 06:32:56PM -0800, Dan Williams wrote:
> On Tue, Jan 29, 2019 at 1:21 PM Jerome Glisse <jglisse@xxxxxxxxxx> wrote:
> >
> > On Tue, Jan 29, 2019 at 12:51:25PM -0800, Dan Williams wrote:
> > > On Tue, Jan 29, 2019 at 11:32 AM Jerome Glisse <jglisse@xxxxxxxxxx> wrote:
> > > >
> > > > On Tue, Jan 29, 2019 at 10:41:23AM -0800, Dan Williams wrote:
> > > > > On Tue, Jan 29, 2019 at 8:54 AM <jglisse@xxxxxxxxxx> wrote:
> > > > > >
> > > > > > From: Jérôme Glisse <jglisse@xxxxxxxxxx>
> > > > > >
> > > > > > This adds support for mirroring a vma that is an mmap of a file
> > > > > > on a filesystem using a DAX block device. There is no reason not
> > > > > > to support that case.
> > > > > >
> > > > >
> > > > > The reason not to support it would be if it gets in the way of future
> > > > > DAX development. How does this interact with MAP_SYNC? I'm also
> > > > > concerned if this complicates DAX reflink support. In general I'd
> > > > > rather prioritize fixing the places where DAX is broken today before
> > > > > adding more cross-subsystem entanglements. The unit tests for
> > > > > filesystems (xfstests) are readily accessible. How would I go about
> > > > > regression testing DAX + HMM interactions?
> > > >
> > > > HMM mirrors the CPU page table, so anything you do to the CPU page
> > > > table will be reflected to all HMM mirror users. MAP_SYNC therefore
> > > > has no bearing here: every HMM mirror user must do cache-coherent
> > > > access to the range it mirrors, so from the DAX point of view this
> > > > is just _exactly_ the same as CPU access.
> > > >
> > > > Note that you can not migrate DAX memory to GPU memory, so for an
> > > > mmap of a file on a filesystem that uses a DAX block device you can
> > > > not do migration to device memory. Also, at this time migration of
> > > > file-backed pages is only supported for cache-coherent device
> > > > memory, for instance on OpenCAPI platforms.
> > >
> > > Ok, this addresses the primary concern about maintenance burden. Thanks.
> > >
> > > However the changelog still amounts to a justification of "change
> > > this, because we can". At least, that's how it reads to me. Is there
> > > any positive benefit to merging this patch? Can you spell that out in
> > > the changelog?
> >
> > There are 3 reasons for this:
> 
> Thanks for this.
> 
> >     1) Convert ODP to use HMM underneath so that we share code between
> >     infiniband ODP and GPU drivers. ODP does support DAX today, so I
> >     can not convert ODP to HMM without also supporting DAX in HMM;
> >     otherwise I would regress the ODP features.
> >
> >     2) I expect people will be running GPGPU on computers with files
> >     that use DAX, and they will want to use HMM there too. In fact,
> >     from the user-space point of view, whether the file is DAX or not
> >     should only change one thing: for a DAX file you will never be
> >     able to use GPU memory.
> >
> >     3) I want to convert as many users of GUP to HMM as possible (I
> >     already posted several patchsets to the GPU mailing list for that
> >     and I intend to post a v2 of those later on). Using HMM avoids GUP
> >     and the GUP pin, as here we abide by the mmu notifier, hence we do
> >     not inhibit any of the filesystem's regular operations. Some of
> >     those GPU drivers do allow GUP on DAX files, so again I can not
> >     regress them.
> 
> Is this really a GUP to HMM conversion, or a GUP to mmu_notifier
> solution? It would be good to boil this conversion down to the base
> building blocks. It seems "HMM" can mean several distinct pieces of
> infrastructure. Is it possible to replace some GUP usage with an
> mmu_notifier based solution without pulling in all of HMM?

Kind of both: some of the GPU drivers I am converting will use HMM for
more than just this GUP reason. But when, and for what hardware, they
will use HMM is not something I can share (it is up to each vendor to
announce their hardware and features on their own timeline).

So yes, you could do the mmu notifier solution without pulling in HMM
mirror (note that you do not need to pull in all of HMM; HMM has many
kernel config options and for this you only need HMM mirror). But if
you are not using HMM then you will just be duplicating the same code
as HMM mirror. So I believe it is better to share this code: if we
want to change core mm then we only have to update HMM while keeping
the API/contract with device drivers intact. This is one of the
motivations behind HMM, i.e. to have it as an impedance layer between
mm and device drivers, so that mm folks do not have to understand
every single device driver but only the contract HMM has with all
device drivers that use it.

Also, having each driver duplicate this code increases the risk of
one getting a little detail wrong. The hope is that by sharing the
same HMM code across all the drivers, everyone benefits from debugging
the same code (I am hoping I do not have many bugs left :)).


> > > > Bottom line is you just have to worry about the CPU page table.
> > > > Whatever you do there will be reflected properly. It does not add
> > > > any burden to people working on DAX, unless you want to modify the
> > > > CPU page table without calling the mmu notifier, but in that case
> > > > you would not only break HMM mirror users but other things like
> > > > KVM ...
> > > >
> > > >
> > > > For testing, the question is: what do you want to test? Do you
> > > > want to test that a device properly mirrors an mmap of a file
> > > > backed by DAX, i.e. that device drivers which use HMM mirror keep
> > > > working after changes made to DAX?
> > > >
> > > > Or do you want to run a filesystem test suite using the GPU to
> > > > access mmaps of the file (read or write) instead of the CPU? In
> > > > that case any such test suite would need to be updated to be able
> > > > to use something like OpenCL for that. At this time I do not see
> > > > much need for that, but maybe this is something people would like
> > > > to see.
> > >
> > > In general, as HMM grows intercept points throughout the mm it would
> > > be helpful to be able to sanity check the implementation.
> >
> > I usually use a combination of simple OpenCL programs and
> > hand-tailored direct ioctl hacks to force specific code paths to
> > happen. I should probably create a repository with a set of OpenCL
> > tests so that others can also use them. I need to clean those up into
> > something not too ugly so I am not ashamed of them.
> 
> That would be great, even if it is messy.

I will clean them up and put something together that I am not too
ashamed to push :) I am on PTO for the next couple of weeks so it will
probably not happen before I am back. I should still have email access.

Cheers,
Jérôme



