On Thu, Jul 6, 2023 at 3:06 PM Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
>
> On 07/06/23 11:25, Jiaqi Yan wrote:
> > On Wed, Jul 5, 2023 at 4:57 PM Mike Kravetz <mike.kravetz@xxxxxxxxxx> wrote:
> > > On 06/23/23 16:40, Jiaqi Yan wrote:
> > > >
> > > > +bool is_raw_hwp_subpage(struct folio *folio, struct page *subpage)
> > > > +{
> > > > +	bool ret;
> > > > +
> > > > +	spin_lock_irq(&hugetlb_lock);
> > > > +	ret = __is_raw_hwp_subpage(folio, subpage);
> > > > +	spin_unlock_irq(&hugetlb_lock);
> > >
> > > Can you describe what races the hugetlb_lock prevents here?
> >
> > I think we should sync here with __get_huge_page_for_hwpoison, which
> > iterates over and inserts entries into raw_hwp_list. llist itself doesn't
> > ensure insertion is synchronized with the iteration done by
> > __is_raw_hwp_subpage.
> >
>
> Ok, makes sense. And, since this is only called in the file read path
> when we encounter a PageHWPoison(page), the overhead of the lock cycles
> is not of concern.

Yes, thanks for pointing this out (which I had forgotten):
is_raw_hwp_subpage() is only reached after the PageHWPoison check in
hugetlbfs_read_iter. I think both this and the reason for holding the
lock are worth mentioning in the commit msg.

> You can add,
>
> Reviewed-by: Mike Kravetz <mike.kravetz@xxxxxxxxxx>
> --
> Mike Kravetz