2012-12-12 20:57, Vyacheslav Dubeyko <slava@xxxxxxxxxxx>:
> Hi,
>
> On Dec 12, 2012, at 6:30 PM, Sven-Göran Bergh wrote:
>
> [snip]
>>>
>>> I think that this task hides many difficult questions. How do we
>>> define which files are fragmented and which are not? How do we
>>> measure the degree of fragmentation? What degree of fragmentation
>>> should trigger defragmentation activity? When do we need to detect
>>> fragmentation, and how do we keep this knowledge? How do we
>>> defragment without degrading performance?
>>
>> These questions are of special interest if we throw the type of media
>> into the discussion. Fragmentation is not a big deal on NAND-based
>> media (SSDs, memory cards, USB sticks, etc.). Defragmentation activity
>> might even shorten the lifetime of such media due to the limited
>> number of write/erase cycles.
>>
>
> That is a good remark. Thank you.
>
> Yes, of course, we need to keep NAND wear in mind, especially in the
> case of online defragmentation. But, as you know, NILFS2 performs
> garbage collection. As I understand it, the GC can copy some blocks
> from segments being cleaned into new ones, so such copying also
> shortens NAND lifetime. Do you suggest not performing garbage
> collection because of that? No, obviously not. :-)
>
> We have at minimum two points for online defragmentation: (1) before
> flushing; (2) during garbage collection. Thereby, if you defragment
> before any write, you don't shorten NAND lifetime. We need to perform
> garbage collection anyway, so it is possible to use this activity for
> defragmentation as well.
>
> Yes, of course, NAND flash has good performance for random reads. But
> contiguous file blocks are still better than the fragmented case.
> First of all, when reading a fragmented file you need to generate a
> block address before reading every non-contiguous block. So you will
> spend more cycles reading a fragmented file than one whose blocks are
> contiguous.
> Secondly, because of read disturb, random reads can force the FTL to
> copy more erase blocks to new ones and, as a result, shorten NAND
> lifetime further. Thirdly, a fragmented volume state leads to more
> complex and unpredictable workloads with more intensive metadata
> operations, which can degrade filesystem performance. And, finally,
> the GC has a harder job on a fragmented volume, especially in the
> presence of deleted files.

Ok, it seems you misunderstood my previous statement. I do not argue
against defragmentation. However, there are many use cases, and I just
felt that NAND wear is of great importance as SSDs are marching in.
Thus, it should be part of the discussion, so that defrag is implemented
in a NAND-friendly way, trying to minimize NAND wear as well. As you
pointed out above, there are many parameters in the equation, and NAND
is yet another one that needs to be considered.

> Thereby, I think that it makes sense to implement online
> defragmentation for NILFS2. But, of course, it is a difficult and
> complex task because of the risk of degrading performance and
> shortening NAND lifetime.

Spot on! Totally agree.

Brgds
/S-G

> With the best regards,
> Vyacheslav Dubeyko.
>
>> Brgds
>> /S-G
>>
>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-nilfs"
> in the body of a message to majordomo@xxxxxxxxxxxxxxx
> More majordomo info at http://vger.kernel.org/majordomo-info.html