Ric,

> Agree, but I think that there is also a base level performance
> question - how does the discard/zero perform by itself. Specifically,
> we have had to punt the discard of a whole block device before mkfs
> (back at RH) since it tripped up a significant number of
> devices. Similar pain for small discards (say one fs page) - is it too
> slow to do?

Sure. Just wanted to emphasize the difference between the performance
cost of executing the command and the potential future performance
impact.

>> WRITE SAME also has an ANCHOR flag which provides a use case we
>> currently don't have fallocate plumbing for: Allocating blocks without
>> caring about their contents. I.e. the blocks described by the I/O are
>> locked down to prevent ENOSPC for future writes.
>
> Thanks for that detail! Sounds like ANCHOR in this case exposes
> whatever data is there (similar I suppose to normal block device
> behavior without discard for unused space)? Seems like it would be
> useful for virtually provisioned devices (enterprise arrays or
> something like dm-thin targets) more than normal SSDs?

It is typically used to pin down important areas to ensure one doesn't
get ENOSPC when writing journal or metadata. However, these are
typically the areas that we deliberately zero to ensure predictable
results. So I think the only case where anchoring makes much sense is on
devices that do zero detection and thus wouldn't actually provision N
blocks full of zeroes.

-- 
Martin K. Petersen	Oracle Linux Engineering