Re: [RFC PATCH 14/39] KVM: guest_memfd: hugetlb: initialization and cleanup

Peter Xu <peterx@xxxxxxxxxx> writes:

> On Tue, Sep 10, 2024 at 11:43:45PM +0000, Ackerley Tng wrote:
>> +/**
>> + * Removes folios in range [@lstart, @lend) from page cache of inode, updates
>> + * inode metadata and hugetlb reservations.
>> + */
>> +static void kvm_gmem_hugetlb_truncate_folios_range(struct inode *inode,
>> +						   loff_t lstart, loff_t lend)
>> +{
>> +	struct kvm_gmem_hugetlb *hgmem;
>> +	struct hstate *h;
>> +	int gbl_reserve;
>> +	int num_freed;
>> +
>> +	hgmem = kvm_gmem_hgmem(inode);
>> +	h = hgmem->h;
>> +
>> +	num_freed = kvm_gmem_hugetlb_filemap_remove_folios(inode->i_mapping,
>> +							   h, lstart, lend);
>> +
>> +	gbl_reserve = hugepage_subpool_put_pages(hgmem->spool, num_freed);
>> +	hugetlb_acct_memory(h, -gbl_reserve);
>
> I wonder whether this is needed, and whether hugetlb_acct_memory() needs to
> be exported in the other patch.
>
> IIUC subpools manages the global reservation on its own when min_pages is
> set (which should be gmem's case, where both max/min set to gmem size).
> That's in hugepage_put_subpool() -> unlock_or_release_subpool().
>

Thank you for pointing this out! You are right, and I will remove the
hugetlb_acct_memory() call here.

>> +
>> +	spin_lock(&inode->i_lock);
>> +	inode->i_blocks -= blocks_per_huge_page(h) * num_freed;
>> +	spin_unlock(&inode->i_lock);
>> +}
