On 1/18/2024 3:21 AM, Dave Chinner wrote:
> On Wed, Jan 17, 2024 at 12:58:12PM +0100, Javier González wrote:
>> On 16.01.2024 11:39, Viacheslav Dubeyko wrote:
>>>> On Jan 15, 2024, at 8:54 PM, Javier González <javier.gonz@xxxxxxxxxxx> wrote:
>>>>> How can FDP technology improve the efficiency and reliability of
>>>>> kernel-space file systems?
>>>>
>>>> This is an open problem. Our experience is that making data placement
>>>> decisions in the FS is tricky (beyond the obvious data/metadata split). If
>>>> someone has a good use case for this, I think it is worth exploring.
>>>> F2FS is a good candidate, but I am not sure FDP is of interest for
>>>> mobile - here ZUFS seems to be the current dominant technology.
>>>>
>>>
>>> If I understand the FDP technology correctly, I can see the benefits for
>>> file systems. :)
>>>
>>> For example, SSDFS is based on a segment concept and has multiple
>>> types of segments (superblock, mapping table, segment bitmap, b-tree
>>> nodes, user data). So, at first, I can use hints to place different segment
>>> types into different reclaim units.
>>
>> Yes. This is what I meant by data/metadata. We have also looked into
>> using one RUH for metadata and making the rest available to applications. We
>> decided to go with a simple solution to start with and extend it as we
>> see users.
>
> XFS has an abstract type definition for metadata that it uses to
> prioritise cache reclaim (i.e. it classifies which metadata is more
> important/hotter), and that could easily be extended to IO hints
> to indicate placement.

That sounds very useful.

> We also have a separate journal IO path, and that is probably the
> hottest LBA region of the filesystem (a circular overwrite region),
> which would deserve its own classification as well.

In the past, I saw nice wins after separating the journal in XFS and Ext4 [1]. This is a low-effort, high-gain item.

[1] https://www.usenix.org/system/files/conference/fast18/fast18-rho.pdf