>> >> Any update on this one?
>
> I apologize for my dreadful unresponsiveness.
>
> I've spent the last day trying to love yours, and considering how mine
> might be improved; but repeatedly arrived at the conclusion that mine is
> about as nice as we can manage, just needing more comment to dignify it.
>
> I did willingly call my find_get_entries() stopping at PageTransCompound
> a hack; but now think we should just document that behavior and accept it.
> The contortions of your patch come from the need to release those 14 extra
> unwanted references: much better not to get them in the first place.
>
> Neither of us handle a failed split optimally, we treat every tail as an
> opportunity to retry: which is good to recover from transient failures,
> but probably excessive. And we both have to restart the pagevec after
> each attempt, but at least I don't get 14 unwanted extras each time.
>
> What of other find_get_entries() users and its pagevec_lookup_entries()
> wrapper: does an argument to select this "stop at PageTransCompound"
> behavior need to be added?
>
> No. The pagevec_lookup_entries() calls from mm/truncate.c prefer the
> new behavior - evicting the head from page cache removes all the tails
> along with it, so getting the tails is a waste of time there too, just as
> it was in shmem_undo_range().
>
> Whereas shmem_unlock_mapping() and shmem_seek_hole_data(), as they
> stand, are not removing pages from cache, but actually wanting to plod
> through the tails. So those two would be slowed a little, while the
> pagevec collects 1 instead of 15 pages. However: if we care about those
> two at all, it's clear that we should speed them up, by noticing the
> PageTransCompound case and accelerating over it, instead of plodding
> through the tails. Since we haven't bothered with that optimization
> yet, I'm not very worried to slow them down a little until it's done.
>
> I must take a look at where we stand with tmpfs 64-bit ino tomorrow,
> then recomment my shmem_punch_compound() patch and post it properly,
> probably day after. (Reviewing it, I see I have a "page->index +
> HPAGE_PMD_NR <= end" test which needs correcting - I tend to live
> in the past, before v4.13 doubled the 32-bit MAX_LFS_FILESIZE.)
>
> I notice that this thread has veered off into QEMU ballooning
> territory: which may indeed be important, but there's nothing at all
> that I can contribute on that. I certainly do not want to slow down
> anything important, but remain convinced that the correct filesystem
> implementation for punching a hole is to punch a hole.

I am not completely sure I follow all the shmem details (sorry!). But
trying to punch a "partial hole" into a hugetlbfs page will result in
the very same behavior as with shmem as of now, no?

FALLOC_FL_PUNCH_HOLE: "Within the specified range, partial filesystem
blocks are zeroed, and whole filesystem blocks are removed from the
file." ... "After a successful call, subsequent reads from this range
will return zeros."

So, as long as we are talking about partial blocks, the documented
behavior seems to be to only zero the memory.

Does this patch fix "FALLOC_FL_PUNCH_HOLE does not free blocks if
called at block granularity on shmem" (which would be a valid fix), or
does it try to implement something that is not documented (removing
partial blocks when called at sub-block granularity)?

I assume the latter, in which case I would interpret "punching a hole
is to punch a hole" as "punching sub-blocks will not free blocks".
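To make the documented semantics quoted above concrete, here is a
rough, purely illustrative userspace sketch (not taken from the patch
under discussion; the /dev/shm path and the 4KiB block size are
assumptions): it punches first a sub-block range and then a
block-aligned range in a plain tmpfs file and prints st_blocks after
each step. On a huge=always tmpfs mount the relevant "block" would
instead be the 2MB huge page, which is the case being debated here.

/*
 * Illustrative sketch only: punch a sub-block hole and a block-aligned
 * hole in a tmpfs file and compare st_blocks.  Path and sizes are made
 * up for the example; assumes a tmpfs mount at /dev/shm with 4KiB blocks.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

static void report_blocks(int fd, const char *when)
{
	struct stat st;

	if (fstat(fd, &st)) {
		perror("fstat");
		exit(1);
	}
	printf("%-26s st_blocks=%lld (512-byte units)\n",
	       when, (long long)st.st_blocks);
}

int main(void)
{
	const char *path = "/dev/shm/punch-demo";	/* made-up path */
	const off_t blksz = 4096;
	char buf[4096];
	off_t off;
	int fd;

	fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
	if (fd < 0) {
		perror("open");
		exit(1);
	}
	unlink(path);

	/* Populate 16 blocks so there is something to punch out. */
	memset(buf, 0xaa, sizeof(buf));
	for (off = 0; off < 16 * blksz; off += blksz)
		if (pwrite(fd, buf, blksz, off) != blksz) {
			perror("pwrite");
			exit(1);
		}
	report_blocks(fd, "after writing 16 blocks:");

	/* Sub-block punch: only zeroing of the range is guaranteed. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      blksz + 512, 1024)) {
		perror("fallocate (partial)");
		exit(1);
	}
	report_blocks(fd, "after sub-block punch:");

	/* Block-aligned punch: these whole blocks should be freed. */
	if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
		      8 * blksz, 4 * blksz)) {
		perror("fallocate (aligned)");
		exit(1);
	}
	report_blocks(fd, "after whole-block punch:");

	close(fd);
	return 0;
}

Per the man-page wording, only the second, block-aligned punch is
expected to reduce st_blocks; the sub-block punch only guarantees that
subsequent reads of that range return zeros.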
(if somebody could enlighten me which important piece I am missing or
messing up, that would be great :) )

-- 
Thanks,

David / dhildenb