On 3/28/19 12:11 PM, Jerome Glisse wrote:
> On Thu, Mar 28, 2019 at 04:07:20AM -0700, Ira Weiny wrote:
>> On Mon, Mar 25, 2019 at 10:40:02AM -0400, Jerome Glisse wrote:
>>> From: Jérôme Glisse <jglisse@xxxxxxxxxx>
>>>
>>> Every time i read the code to check that the HMM structure does not
>>> vanish before it should thanks to the many lock protecting its removal
>>> i get a headache. Switch to reference counting instead it is much
>>> easier to follow and harder to break. This also remove some code that
>>> is no longer needed with refcounting.
>>>
>>> Changes since v1:
>>>     - removed bunch of useless check (if API is use with bogus argument
>>>       better to fail loudly so user fix their code)
>>>     - s/hmm_get/mm_get_hmm/
>>>
>>> Signed-off-by: Jérôme Glisse <jglisse@xxxxxxxxxx>
>>> Reviewed-by: Ralph Campbell <rcampbell@xxxxxxxxxx>
>>> Cc: John Hubbard <jhubbard@xxxxxxxxxx>
>>> Cc: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
>>> Cc: Dan Williams <dan.j.williams@xxxxxxxxx>
>>> ---
>>>  include/linux/hmm.h |   2 +
>>>  mm/hmm.c            | 170 ++++++++++++++++++++++++++++----------------
>>>  2 files changed, 112 insertions(+), 60 deletions(-)
>>>
>>> diff --git a/include/linux/hmm.h b/include/linux/hmm.h
>>> index ad50b7b4f141..716fc61fa6d4 100644
>>> --- a/include/linux/hmm.h
>>> +++ b/include/linux/hmm.h
>>> @@ -131,6 +131,7 @@ enum hmm_pfn_value_e {
>>>  /*
>>>   * struct hmm_range - track invalidation lock on virtual address range
>>>   *
>>> + * @hmm: the core HMM structure this range is active against
>>>   * @vma: the vm area struct for the range
>>>   * @list: all range lock are on a list
>>>   * @start: range virtual start address (inclusive)
>>> @@ -142,6 +143,7 @@ enum hmm_pfn_value_e {
>>>   * @valid: pfns array did not change since it has been fill by an HMM function
>>>   */
>>>  struct hmm_range {
>>> +	struct hmm		*hmm;
>>>  	struct vm_area_struct	*vma;
>>>  	struct list_head	list;
>>>  	unsigned long		start;
>>> diff --git a/mm/hmm.c b/mm/hmm.c
>>> index fe1cd87e49ac..306e57f7cded 100644
>>> --- a/mm/hmm.c
>>> +++ b/mm/hmm.c
>>> @@ -50,6 +50,7 @@ static const struct mmu_notifier_ops hmm_mmu_notifier_ops;
>>>   */
>>>  struct hmm {
>>>  	struct mm_struct	*mm;
>>> +	struct kref		kref;
>>>  	spinlock_t		lock;
>>>  	struct list_head	ranges;
>>>  	struct list_head	mirrors;
>>> @@ -57,6 +58,16 @@ struct hmm {
>>>  	struct rw_semaphore	mirrors_sem;
>>>  };
>>>
>>> +static inline struct hmm *mm_get_hmm(struct mm_struct *mm)
>>> +{
>>> +	struct hmm *hmm = READ_ONCE(mm->hmm);
>>> +
>>> +	if (hmm && kref_get_unless_zero(&hmm->kref))
>>> +		return hmm;
>>> +
>>> +	return NULL;
>>> +}
>>> +
>>>  /*
>>>   * hmm_register - register HMM against an mm (HMM internal)
>>>   *
>>> @@ -67,14 +78,9 @@ struct hmm {
>>>   */
>>>  static struct hmm *hmm_register(struct mm_struct *mm)
>>>  {
>>> -	struct hmm *hmm = READ_ONCE(mm->hmm);
>>> +	struct hmm *hmm = mm_get_hmm(mm);
>>
>> FWIW: having hmm_register == "hmm get" is a bit confusing...
>
> The thing is that you want only one hmm struct per process and thus
> if there is already one and it is not being destroy then you want to
> reuse it.
>
> Also this is all internal to HMM code and so it should not confuse
> anyone.
>

Well, it has repeatedly come up, and I'd claim that it is quite
counter-intuitive. So if there is an easy way to make this internal HMM
code clearer or better named, I would really love that to happen.

And we shouldn't ever dismiss feedback based on "this is just internal
xxx subsystem code, no need for it to be as clear as other parts of the
kernel", right?

thanks,
-- 
John Hubbard
NVIDIA
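
For readers following the thread, here is a minimal, self-contained sketch of
the kref-based get-or-reuse pattern the patch moves to. The mm_get_hmm() and
hmm_register() names mirror the diff quoted above, but the mmu-notifier
registration, range/mirror lists, and locking of the real mm/hmm.c are
deliberately left out, so treat this as an illustration of
kref_get_unless_zero(), not the actual kernel code:

	#include <linux/kref.h>
	#include <linux/mm_types.h>
	#include <linux/slab.h>

	/* Simplified stand-in for the real struct hmm in mm/hmm.c. */
	struct hmm {
		struct mm_struct	*mm;
		struct kref		kref;
	};

	/* Take a reference only if the refcount has not already hit zero. */
	static inline struct hmm *mm_get_hmm(struct mm_struct *mm)
	{
		struct hmm *hmm = READ_ONCE(mm->hmm);

		if (hmm && kref_get_unless_zero(&hmm->kref))
			return hmm;

		return NULL;
	}

	static void hmm_free(struct kref *kref)
	{
		struct hmm *hmm = container_of(kref, struct hmm, kref);

		kfree(hmm);
	}

	static inline void hmm_put(struct hmm *hmm)
	{
		kref_put(&hmm->kref, hmm_free);
	}

	/*
	 * Reuse the per-mm struct hmm if it is still alive, otherwise
	 * allocate a fresh one. The real hmm_register() also registers an
	 * mmu notifier and publishes mm->hmm under proper locking; that is
	 * omitted here.
	 */
	static struct hmm *hmm_register(struct mm_struct *mm)
	{
		struct hmm *hmm = mm_get_hmm(mm);

		if (hmm)
			return hmm;

		hmm = kzalloc(sizeof(*hmm), GFP_KERNEL);
		if (!hmm)
			return NULL;

		kref_init(&hmm->kref);	/* refcount starts at 1 */
		hmm->mm = mm;
		mm->hmm = hmm;		/* simplified: no race handling here */

		return hmm;
	}

The point of kref_get_unless_zero() is that a reader who finds mm->hmm
non-NULL may still race with the final hmm_put(); the get only succeeds while
the count is above zero, so callers can never resurrect an object that is
already on its way to being freed, which is what replaces the web of locks the
commit message complains about.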