Hello, Martin.

On Tue, Jan 06, 2015 at 07:05:40PM -0500, Martin K. Petersen wrote:
> Tejun> Isn't that kinda niche and specialized tho?
>
> I don't think so. There are two reasons for zeroing block ranges:
>
> 1) To ensure they contain zeroes on subsequent reads
>
> 2) To preallocate them or anchor them down on thin provisioned devices
>
> The filesystem folks have specifically asked to be able to make that
> distinction. Hence the patch that changes blkdev_issue_zeroout().
>
> You really don't want to write out gobs and gobs of zeroes and cause
> unnecessary flash wear if all you care about is the blocks being in a
> deterministic state.

I think I'm still missing something. Are there enough cases where
filesystems want to write out zeroes during operation? Earlier in the
thread, it was mentioned that this is currently mostly useful for raids
which need the blocks actually cleared for checksum consistency, which
basically means that raid metadata handling isn't (yet) capable of just
marking those (parts of) stripes as unused.

If a filesystem wants to read back zeroes from data blocks, wouldn't it
just mark the matching index as such? And if you take out the zeroing
case, trims are just trims, and whether they return 0 afterwards or not
is irrelevant.

There sure can be use cases where zeroing fast and securely makes a
noticeable difference, but the cases put forth till this point seem
relatively weak. I mean, after all, requiring trim to zero the blocks
is essentially pushing that amount of metadata management down to the
device - the device would do exactly the same thing. Pushing it down
the layers can definitely be beneficial, especially when there's no
agreed-upon metadata on the medium (so, mkfs time), but it seems kinda
superfluous during normal operation.

What am I missing?

Thanks.
-- 
tejun