On 05/13/2015 04:31 AM, Daniel Phillips wrote:

Let me be the first to catch that arithmetic error...

> Let's say our delta size is 400MB (typical under load) and we leave
> a "nice big gap" of 112 MB after flushing each one. Let's say we do
> two thousand of those before deciding that we have enough information
> available to switch to some smarter strategy. We used one GB of a
> 4TB disk, say. The media transfer rate decreased by a factor of:
>
>    (1 - 2/1000) = .2%.

Ahem, no, we used 1/8th of the disk. Taking transfer time per byte to
rise linearly from 1.0 at the start of the disk to 2.0 at the far end,
the time/data rate increased from unity to 1.125 across the region we
used, for an average of 1.0625. If we only use 1/10th of the disk
instead, by not leaving gaps, then the average time/data across the
region is 1.05. The difference, 1.0625 - 1.05 = .0125, means the gap
strategy increases media transfer time by 1.25%, which is not
significant compared to the 400% performance deficit in question. So,
same argument: the change in media transfer rate is just a distraction
from the original question. (A small Python check of these numbers
appears below.)

In any case, we probably want to start using a smarter strategy sooner
than 1000 commits, maybe after ten or a hundred commits, which would
make the change in media transfer rate even less relevant.

The thing is, when data first starts landing on media, we do not have
much information about what the long-term load will be. So we just
analyze the clues we have in the early commits and put those early
deltas onto disk in the most efficient format, which for Tux3 seems to
be linear per delta. There would be exceptions, but that is the common
case. Then we get smarter later (a sketch of that policy follows the
arithmetic check below). The intent is to get the best of both: early
efficiency and nice long-term aging behavior. I do not accept the
proposition that one must be sacrificed for the other; I find that
reasoning faulty.

> The performance deficit in question and the difference in media rate
> are three orders of magnitude apart, does that justify the term
> "similar or identical"?

Regards,

Daniel
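To make the arithmetic above concrete, here is a small Python check of
the numbers in this reply. The linear time-per-byte model (1.0 at the
start of the disk, 2.0 at the far end) is an assumption inferred from
the 1.125 and 1.0625 figures above, not something measured:

    # Sanity check of the gap-strategy arithmetic above. Assumption:
    # time per byte rises linearly with disk position, from 1.0 at the
    # first block to 2.0 at the last (inferred from the figures above).

    MB = 2**20
    delta = 400 * MB          # one delta, typical under load
    gap = 112 * MB            # "nice big gap" left after each delta
    commits = 2000

    span_gaps = commits * (delta + gap)   # address space used with gaps
    span_tight = commits * delta          # address space used without
    print("with gaps: %.2f TB, without: %.2f TB"
          % (span_gaps / 1e12, span_tight / 1e12))

    def avg_slowdown(f):
        # Mean of (1 + x) for x in [0, f]: the average time/data over
        # the first fraction f of the disk under the linear model.
        return 1 + f / 2

    f_gaps, f_tight = 1 / 8, 1 / 10       # disk fractions used
    extra = avg_slowdown(f_gaps) - avg_slowdown(f_tight)
    print("gap strategy costs %.2f%% extra transfer time" % (extra * 100))
    # -> gap strategy costs 1.25% extra transfer time

Against the 400% deficit under discussion, 1.25% is noise.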
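And since the "start linear, get smarter later" policy above is only
described in prose, here is a minimal sketch of its shape. Everything
in it is hypothetical, the class name, the place_delta() interface, and
the ten-commit threshold included; it is not Tux3 code:

    # Hypothetical sketch of the two-phase placement policy described
    # above; not Tux3 code. Early deltas land linearly, then we switch
    # to a load-informed allocator once early commits give us clues.

    SMART_THRESHOLD = 10   # "ten or a hundred commits"

    class DeltaPlacer:
        def __init__(self):
            self.commits = 0
            self.next_block = 0

        def place_delta(self, nblocks):
            self.commits += 1
            if self.commits <= SMART_THRESHOLD:
                # Common case: linear per delta, the most efficient
                # format for early commits.
                start = self.next_block
                self.next_block += nblocks
                return start
            # Later: use whatever the early commits told us about the
            # long-term load. The details are the open design question.
            return self.smart_place(nblocks)

        def smart_place(self, nblocks):
            raise NotImplementedError("load-aware placement goes here")

The point of the structure is that the linear fast path costs nothing
while the allocator gathers the information the smart path needs.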