Hi Pavel,

On 05/11/2015 03:12 PM, Pavel Machek wrote:
>>> It is a fact of life that when you change one aspect of an intimately
>>> interconnected system, something else will change as well. You have
>>> naive/nonexistent free space management now; when you design something
>>> workable there it is going to impact everything else you've already
>>> done. It's an easy bet that the impact will be negative, the only
>>> question is to what degree.
>>
>> You might lose that bet. For example, suppose we do strictly linear
>> allocation each delta, and just leave nice big gaps between the deltas
>> for future expansion. Clearly, we run at similar or identical speed to
>> the current naive strategy until we must start filling in the gaps, and
>> at that point our layout is not any worse than XFS, which started bad
>> and stayed that way.
>
> Umm, are you sure? If "some areas of disk are faster than others" is
> still true on today's harddrives, the gaps will decrease the
> performance (as you'll "use up" the fast areas more quickly).

That's why I hedged my claim with "similar or identical". The difference
in media speed seems to be a relatively small effect compared to extra
seeks. It seems that XFS puts big spaces between new directories, and
suffers a lot of extra seeks because of it. I propose to batch new
directories together initially, then change the allocation goal to a new,
relatively empty area if a big batch of files lands on a directory in a
crowded region. The "big" gaps would be on the order of delta size, so
not really very big.

Anyway, some people seem to have pounced on the words "naive" and
"linear allocation" and jumped to the conclusion that our whole strategy
is naive. Far from it. We don't just throw files randomly at the disk.
We sort and partition files and metadata, and we carefully arrange the
order of our allocation operations so that linear allocation produces a
nice layout for both read and write. This turned out to be so much
better than fiddling with the goal of individual allocations that we
concluded we would get the best results by sticking with linear
allocation and improving our sort step instead.

The new plan is to partition updates into batches according to some
affinity metrics, and to set the linear allocation goal per batch. So,
for example, big files and append-type files can get special treatment
in separate batches, while files that appear related, because they share
a directory parent and are written in the same delta, will continue to
be streamed out using "naive" linear allocation, which is not
necessarily as naive as one might think. It will take time and a lot of
performance testing to get this right, but nobody should get the idea
that it is an inherent design limitation. The opposite is true: we have
no restrictions at all in media layout.

Compared to Ext4, we do need to address the issue that data moves around
when updated. This can cause rapid fragmentation. Btrfs has shown issues
with that for big, randomly updated files. We want to fix it without
falling back on update-in-place as Btrfs does. Actually, Tux3 already
has update-in-place, and unlike Btrfs, we can switch to it for non-empty
files. But we think that perfect data isolation per delta is worth
fighting for, and we would rather not force users to fiddle around with
mode settings just to make something work as well as it already does on
Ext4. We will tackle this issue with the partitioning described above,
plus a dedicated allocation strategy for such files, which are easy to
detect.
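To put the batching plan above in concrete terms, here is a rough
userspace sketch. This is not Tux3 code, and every name in it (the
structs, classify(), alloc_blocks(), the example goal numbers) is made
up purely for illustration. It only shows the shape of the idea: dirty
inodes get partitioned into batches by a crude affinity metric, and each
batch hands out blocks linearly from its own allocation goal.

	/* Sketch of per-batch goal allocation. Hypothetical names,
	 * not Tux3 code. A real allocator would consult the free map;
	 * this just counts upward from each batch's goal. */
	#include <stdio.h>

	enum batch_class { BATCH_SMALL, BATCH_BIG, BATCH_APPEND, BATCH_CLASSES };

	struct dirty_inode {
		unsigned long parent;	/* directory parent, an affinity key */
		unsigned long blocks;	/* blocks to allocate this delta */
		int appending;		/* looks like an append-type file */
	};

	struct batch {
		unsigned long goal;	/* linear allocation goal for this batch */
		unsigned long next;	/* next free block within the batch */
	};

	/* Big files and append-type files get their own batches; everything
	 * else streams out linearly with its siblings. */
	static enum batch_class classify(const struct dirty_inode *inode)
	{
		if (inode->appending)
			return BATCH_APPEND;
		if (inode->blocks > 1024)
			return BATCH_BIG;
		return BATCH_SMALL;
	}

	static unsigned long alloc_blocks(struct batch *batch, unsigned long count)
	{
		unsigned long block = batch->next;
		batch->next += count;
		return block;
	}

	int main(void)
	{
		struct dirty_inode delta[] = {
			{ .parent = 7, .blocks = 12 },
			{ .parent = 7, .blocks = 8 },
			{ .parent = 9, .blocks = 5000 },		/* big file */
			{ .parent = 7, .blocks = 4, .appending = 1 },
		};
		/* Per-batch goals, e.g. chosen from relatively empty regions. */
		struct batch batches[BATCH_CLASSES] = {
			[BATCH_SMALL]  = { .goal = 100000, .next = 100000 },
			[BATCH_BIG]    = { .goal = 500000, .next = 500000 },
			[BATCH_APPEND] = { .goal = 900000, .next = 900000 },
		};
		for (unsigned i = 0; i < sizeof delta / sizeof *delta; i++) {
			struct batch *batch = &batches[classify(&delta[i])];
			unsigned long block = alloc_blocks(batch, delta[i].blocks);
			printf("inode %u: %lu blocks at %lu\n", i, delta[i].blocks, block);
		}
		return 0;
	}

The point is only that the goal is chosen once per batch, so small
related files still land next to each other, while big and append-type
files are kept from interleaving with them.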
Metadata moving around per update does not seem to be a problem, because
it is all single blocks that need very little slack space to stay close
to home.

> Anyway... you have a brand new filesystem. Of course it should be
> faster/better/nicer than the existing filesystems. So don't be too
> harsh with the XFS people.

They have done a lot of good work, but they still have a long way to go.
I don't see any shame in that.

Regards,

Daniel