On Fri, Nov 1, 2024 at 4:44 AM Jingbo Xu <jefflexu@xxxxxxxxxxxxxxxxx> wrote:
>
> Hi Joanne,
>
> Thanks for continuing to push this forward.
>
> On 11/1/24 5:52 AM, Joanne Koong wrote:
> > On Thu, Oct 31, 2024 at 1:06 PM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
> >>
> >> On Thu, Oct 31, 2024 at 12:06:49PM GMT, Joanne Koong wrote:
> >>> On Wed, Oct 30, 2024 at 5:30 PM Shakeel Butt <shakeel.butt@xxxxxxxxx> wrote:
> >> [...]
> >>>>
> >>>> "Memory pool" is a bit of a confusing term here. Most probably you are
> >>>> asking about the migrate type of the page block from which the tmp page
> >>>> is allocated. On a normal system, the tmp page would be allocated from a
> >>>> page block with the MIGRATE_UNMOVABLE migrate type, while for the page
> >>>> cache page it depends on which gfp flags were used for its allocation.
> >>>> What does fuse use? GFP_HIGHUSER_MOVABLE or something else? Under low
> >>>> memory conditions, allocations with different migrate types can get
> >>>> mixed up.
> >>>>
> >>>
> >>> I believe it's GFP_HIGHUSER_MOVABLE for the page cache pages, since
> >>> fuse doesn't set any additional gfp masks on the inode mapping.
> >>>
> >>> Could we just allocate the fuse writeback pages with GFP_HIGHUSER
> >>> instead of GFP_HIGHUSER_MOVABLE? That would be in fuse_write_begin(),
> >>> where we pass the gfp mask to __filemap_get_folio(). I think this
> >>> would give us the same memory behavior as what the tmp pages
> >>> currently do.
> >>
> >> I don't think it would be the same behavior. From what I understand, the
> >> lifetime of the tmp page spans from the start of writeback until the ack
> >> from the fuse server that writeback is done, while the lifetime of a
> >> page cache page can be arbitrarily long. We shouldn't make it unmovable
> >> for its whole lifetime; I think it is fine to make the page unmovable
> >> just for the duration of writeback. We should not try to optimize for
> >> the bad or buggy behavior of a fuse server.
> >>
> >> Regarding the avoidance of waiting on writeback for fuse folios, I think
> >> we can handle migration similarly to how you are handling reclaim, and
> >> in addition we can add a WARN() in folio_wait_writeback() if the kernel
> >> ever sees a fuse folio in that function.
> >
> > Awesome, this is what I'm planning to do in v3 to address migration then:
> >
> > 1) in migrate_folio_unmap(), only call "folio_wait_writeback(src);" if
> > src->mapping does not have the AS_NO_WRITEBACK_WAIT bit set on it (eg
> > fuse folios will have that AS_NO_WRITEBACK_WAIT bit set)
>
> I think it's generally okay to skip FUSE pages under writeback when the
> sync migrate_pages() is called in a low-memory context, which only tries
> to migrate as many pages as possible (i.e. best effort).
>
> More caution may be needed, though, when the sync migrate_pages() is
> called with an implicit hint that the migration cannot fail. For example:
>
> ```
> offline_pages
>     while {
>         scan_movable_pages
>         do_migrate_range
>     }
> ```
>
> If a malicious server never completes the writeback IO, no progress
> will be made in the above while loop, and I'm afraid it will become a
> dead loop.
>

Thanks for taking a look and sharing your thoughts. I agree. I think for
this offline_pages() path, we need to handle the "TODO: fatal migration
failures should bail out". For v3 I'm thinking of handling this by
retrying do_migrate_range() some bounded number of times and, if it
still doesn't succeed, skipping those pages and moving on to the next.

> >
> > 2) in the fuse filesystem's implementation of the
> > mapping->a_ops->migrate_folio callback, return -EAGAIN if the folio is
> > under writeback.
>
> Is there any possibility that a_ops->migrate_folio() may be called with
> the folio under writeback?
>
> - for most pages without AS_NO_WRITEBACK_WAIT, a_ops->migrate_folio()
>   will be called only after PG_writeback is cleared;
> - AS_NO_WRITEBACK_WAIT pages are skipped if they are under writeback
>

For AS_NO_WRITEBACK_WAIT pages, if we skip waiting on them while they
are under writeback, I think a_ops->migrate_folio() will still get
called (by migrate_pages_batch() -> migrate_folio_move() ->
move_to_new_folio()).

Looking at migrate_folio_unmap() some more though, I don't think we can
just skip the wait call as we can for the sync(2) case. I think we need
to error out here instead, since after the wait call
migrate_folio_unmap() will replace the folio's page table mappings
(try_to_migrate()). If we error out there, then a_ops->migrate_folio()
is never hit while the folio is under writeback.

Thanks,
Joanne

> --
> Thanks,
> Jingbo