On Wed, Sep 16, 2020 at 1:39 AM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
>
> On Wed, Sep 16, 2020 at 01:32:46AM +0800, Muchun Song wrote:
> > On Tue, Sep 15, 2020 at 11:42 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> > >
> > > On Tue, Sep 15, 2020 at 11:28:01PM +0800, Muchun Song wrote:
> > > > On Tue, Sep 15, 2020 at 10:32 PM Matthew Wilcox <willy@xxxxxxxxxxxxx> wrote:
> > > > >
> > > > > On Tue, Sep 15, 2020 at 08:59:23PM +0800, Muchun Song wrote:
> > > > > > This patch series will free some vmemmap pages(struct page structures)
> > > > > > associated with each hugetlbpage when preallocated to save memory.
> > > > >
> > > > > It would be lovely to be able to do this.  Unfortunately, it's completely
> > > > > impossible right now.  Consider, for example, get_user_pages() called
> > > > > on the fifth page of a hugetlb page.
> > > >
> > > > Can you elaborate on the problem? Thanks so much.
> > >
> > > OK, let's say you want to do a 2kB I/O to offset 0x5000 of a 2MB page
> > > on a 4kB base page system.  Today, that results in a bio_vec containing
> > > {head+5, 0, 0x800}.  Then we call page_to_phys() on that (head+5) struct
> > > page to get the physical address of the I/O, and we turn it into a struct
> > > scatterlist, which similarly has a reference to the page (head+5).
> >
> > As I know, in this case, the get_user_pages() will get a reference
> > to the head page (head+0) before returning such that the hugetlb
> > page can not be freed. Although get_user_pages() returns the
> > page (head+5) and the scatterlist has a reference to the page
> > (head+5), this patch series can handle this situation. I can not
> > figure out where the problem is. What I missed? Thanks.
>
> You freed pages 4-511 from the vmemmap so they could be used for
> something else.  Page 5 isn't there any more.  So if you return head+5,
> then when we complete the I/O, we'll look for the compound_head() of
> head+5 and we won't find head.
>

We do not free pages 4-511 from the vmemmap.
Actually, we only free pages 128-511 from the vmemmap. The 512 struct
pages occupy 8 pages of physical memory, and we only free 6 of those
page frames back to the buddy allocator. We then create a new mapping:
the virtual addresses of the freed pages are remapped to the second
page frame, so the second page frame is reused. When a hugetlb page is
preallocated, we can change the mapping as below.

 hugetlbpage                  struct page(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     |     1     | -------------> |     1     |
 |           |                     |     2     | -------------> +-----------+
 |           |                     |     3     | -----------------^ ^ ^ ^ ^
 |           |                     |     4     | -------------------+ | | |
 |    2M     |                     |     5     | ---------------------+ | |
 |           |                     |     6     | -----------------------+ |
 |           |                     |     7     | -------------------------+
 |           |                     +-----------+
 |           |
 +-----------+

As you can see, we reuse the first tail page.

--
Yours,
Muchun