On 3/8/21 3:28 AM, Miaohe Lin wrote:
> The fault_mutex hashing overhead can be avoided in the truncate_op case
> because page faults cannot race with truncation in this routine. So
> calculate the hash for fault_mutex only in the !truncate_op case to save
> some CPU cycles.
> 
> Signed-off-by: Miaohe Lin <linmiaohe@xxxxxxxxxx>
> ---
>  fs/hugetlbfs/inode.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index c262566f7c5d..d81f52b87bd7 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -482,10 +482,9 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>  
>  		for (i = 0; i < pagevec_count(&pvec); ++i) {
>  			struct page *page = pvec.pages[i];
> -			u32 hash;
> +			u32 hash = 0;

Do we need to initialize hash here?  I would not bring this up normally,
but the purpose of the patch is to save CPU cycles.

-- 
Mike Kravetz

> 
>  			index = page->index;
> -			hash = hugetlb_fault_mutex_hash(mapping, index);
>  			if (!truncate_op) {
>  				/*
>  				 * Only need to hold the fault mutex in the
> @@ -493,6 +492,7 @@ static void remove_inode_hugepages(struct inode *inode, loff_t lstart,
>  				 * page faults. Races are not possible in the
>  				 * case of truncation.
>  				 */
> +				hash = hugetlb_fault_mutex_hash(mapping, index);
>  				mutex_lock(&hugetlb_fault_mutex_table[hash]);
>  			}
> 
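
For anyone following along, the shape the review comment is pointing at is
roughly the sketch below. This is an untested userspace illustration, not
the hugetlbfs code: the table size, the trivial hash, and the names
fault_mutex_table, fault_mutex_hash() and remove_pages() are stand-ins for
hugetlb_fault_mutex_table, hugetlb_fault_mutex_hash() and
remove_inode_hugepages(). It shows why hash can safely stay uninitialized:
every read of it sits behind the same !truncate_op guard as the assignment,
and truncate_op does not change inside the loop.

/*
 * Untested userspace sketch of the pattern under discussion, using
 * pthread mutexes as stand-ins for the kernel's fault-mutex table.
 * All names and sizes below are illustrative only.
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define FAULT_MUTEX_SLOTS 8

static pthread_mutex_t fault_mutex_table[FAULT_MUTEX_SLOTS];

/* Placeholder hash; the real hugetlb_fault_mutex_hash() uses more inputs. */
static uint32_t fault_mutex_hash(unsigned long index)
{
	return index % FAULT_MUTEX_SLOTS;
}

static void remove_pages(unsigned long start, unsigned long end,
			 int truncate_op)
{
	unsigned long index;

	for (index = start; index < end; index++) {
		uint32_t hash;	/* deliberately left uninitialized */

		if (!truncate_op) {
			/*
			 * The hash is computed (and the variable first
			 * written) only on the path that uses it, so the
			 * truncate path pays for neither the hash
			 * computation nor a dummy initialization.
			 */
			hash = fault_mutex_hash(index);
			pthread_mutex_lock(&fault_mutex_table[hash]);
		}

		printf("removing page %lu\n", index);	/* the real work */

		if (!truncate_op)
			pthread_mutex_unlock(&fault_mutex_table[hash]);
	}
}

int main(void)
{
	int i;

	for (i = 0; i < FAULT_MUTEX_SLOTS; i++)
		pthread_mutex_init(&fault_mutex_table[i], NULL);

	remove_pages(0, 4, 1);	/* truncate path: no hashing, no locking */
	remove_pages(0, 4, 0);	/* hole-punch path: hash + lock per page */
	return 0;
}

Build with gcc -pthread. Note that some compilers may still emit a
maybe-uninitialized warning on flow like this even though it is safe;
silencing such a warning is presumably what the "= 0" in the patch is for,
and that initialization is exactly the cost being questioned above.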