Hi,

> Thanks for reviewing this.  I think the real solution to this is
> that f2fs should be using large folios.  That way, the page cache
> will keep track of dirtiness on a per-folio basis, and if your folios
> are at least as large as your cluster size, you won't need to do the
> f2fs_prepare_compress_overwrite() dance.  And you'll get at least fifteen
> dirty folios per call instead of fifteen dirty pages, so your costs will
> be much lower.
>
> Is anyone interested in doing the work to convert f2fs to support
> large folios?  I can help, or you can look at the work done for XFS,
> AFS and a few other filesystems.

Seems like an interesting job. I'm not sure if I can be of any help, though.
What needs to be done currently to support large folios? Are there any
roadmaps or reference documents?

Thx,
Yangtao
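
For reference, a minimal sketch of the page-cache opt-in that the XFS
conversion uses. This is only an illustration of the first step (the
example_setup_inode() name is made up; where f2fs would call this, and
how its compression and inline-data paths would cope, is exactly the
open question in this thread):

	#include <linux/pagemap.h>

	static void example_setup_inode(struct inode *inode)
	{
		/*
		 * Tell the page cache it may allocate folios larger than
		 * PAGE_SIZE for this mapping; dirtiness is then tracked
		 * per folio rather than per page.
		 */
		mapping_set_large_folios(inode->i_mapping);
	}

The rest of the work is making every path that assumed PAGE_SIZE pages
(readpage/writepage, truncation, compression clusters) operate on
variable-size folios, as was done for XFS and AFS.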