Re: [PATCH 4/6] mm: hugetlb_vmemmap: add missing smp_wmb() before set_pte_at()

On 8/17/2022 7:21 PM, Muchun Song wrote:
> 
> 
>> On Aug 17, 2022, at 16:41, Miaohe Lin <linmiaohe@xxxxxxxxxx> wrote:
>>
>> On 2022/8/17 10:53, Muchun Song wrote:
>>>
>>>
>>>> On Aug 16, 2022, at 21:05, Miaohe Lin <linmiaohe@xxxxxxxxxx> wrote:
>>>>
>>>> The memory barrier smp_wmb() is needed to make sure that preceding stores
>>>> to the page contents become visible before the below set_pte_at() write.
>>>
>>> I’m not sure if you are right. I think it is set_pte_at()’s responsibility.
>>
>> Maybe not. There are many call sites that do similar things:
>>
>> hugetlb_mcopy_atomic_pte
>> __do_huge_pmd_anonymous_page
>> collapse_huge_page
>> do_anonymous_page
>> migrate_vma_insert_page
>> mcopy_atomic_pte
>>
>> Take do_anonymous_page as an example:
>>
>> 	/*
>> 	 * The memory barrier inside __SetPageUptodate makes sure that
>> 	 * preceding stores to the page contents become visible before
>> 	 * the set_pte_at() write.
>> 	 */
>> 	__SetPageUptodate(page);
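For reference, a minimal sketch of that pattern, loosely following do_anonymous_page() (error handling, locking and the remaining pte flags are omitted, so treat it as an illustration rather than the exact upstream code):

        struct page *page;
        pte_t entry;

        /* Allocate and zero the new anonymous page. */
        page = alloc_zeroed_user_highpage_movable(vma, vmf->address);
        if (!page)
                return VM_FAULT_OOM;

        /*
         * __SetPageUptodate() issues an smp_wmb() before setting
         * PG_uptodate, so the zeroing of the page above is ordered
         * before the pte that set_pte_at() publishes below.
         */
        __SetPageUptodate(page);

        entry = mk_pte(page, vma->vm_page_prot);
        set_pte_at(vma->vm_mm, vmf->address, vmf->pte, entry);

In other words, in these call sites the barrier that orders the page contents against the pte install comes from __SetPageUptodate(), not from set_pte_at() itself.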
> 
> IIUC, in the case here we should make sure other CPUs can see the new page’s
> contents after they have seen PG_uptodate set. I think commit 0ed361dec369
> can tell us more details.
> 
> I also looked at commit 52f37629fd3c to see why we need a barrier before
> set_pte_at(), but I didn’t find any explanation there. I guess we want to
> make sure the page’s contents are ordered before subsequent memory accesses
> through the corresponding virtual address. Do you agree with this?
This is my understanding also. Thanks.

Regards
Yin, Fengwei

> 
> Thanks.
> 
>>
>> So I think a memory barrier is needed before the set_pte_at() write. Or am I missing something?
>>
>> Thanks,
>> Miaohe Lin
>>
>>> Take arm64 (a relaxed memory-order model) as an example: the following
>>> code snippet is set_pte(), and I see a barrier guarantee there. So I am
>>> curious what issue you are facing and what the basis for this change is.
>>>
>>> static inline void set_pte(pte_t *ptep, pte_t pte)
>>> {
>>>        *ptep = pte;
>>>
>>>        /*
>>>         * Only if the new pte is valid and kernel, otherwise TLB maintenance
>>>         * or update_mmu_cache() have the necessary barriers.
>>>         */
>>>        if (pte_valid_not_user(pte)) {
>>>               dsb(ishst);
>>>               isb();
>>>        }
>>> }
>>>
>>> Thanks.
>>>
> 
> 
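For context, the change this patch proposes has roughly the following shape in the vmemmap restore path, where a vmemmap page is refilled from the reused page before being mapped again (a simplified sketch with illustrative variable names, not the exact hunk from the patch):

        void *to = page_to_virt(page);

        /* Refill the to-be-remapped vmemmap page from the reused page. */
        copy_page(to, (void *)walk->reuse_addr);

        /*
         * Make sure the preceding stores to the page contents become
         * visible before the new pte is installed; set_pte_at() alone
         * does not guarantee that ordering on every architecture.
         */
        smp_wmb();

        set_pte_at(&init_mm, addr, pte, mk_pte(page, PAGE_KERNEL));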



