David Buckley <dbuckley@xxxxxxxxxxx> writes:

David,

> They result in discard granularity being forced to logical block size
> if the disk reports LBPRZ is enabled (which the netapp luns do).

Block zeroing and unmapping currently share some plumbing, and that has
led to some compromises. In this case the bias is towards ensuring data
integrity for zeroing at the expense of not aligning unmap requests.
Christoph has worked on separating those two functions. His code is
currently under review.

> I'm not sure of the implications of either the netapp changes, though
> reporting 4k logical blocks seems potential as this is supported in
> newer OS at least.

Yes, but it may break legacy applications that assume a 512-byte
logical block size.

> The sd change potentially would at least partially undo the patches
> referenced above. But it would seem that (assuming an aligned
> filesystem with 4k blocks and minimum_io_size=4096) there is no
> possibility of a partial block discard or advantage to sending the
> discard requests in 512 blocks?

The unmap granularity inside a device is often much, much bigger than
4K, so aligning to that probably won't make a difference. And it is
imperative to filesystems that zeroing works at logical block size
granularity.

The expected behavior for a device is that it unmaps whichever full
unmap-granularity chunks are described by a received request, and then
explicitly zeroes any partial chunks at the head and tail. So I am
surprised you see no reclamation whatsoever.

With the impending zero/unmap separation things might fare better, but
I'd still like to understand the behavior you observe. Please provide
the output of:

  sg_vpd -p lbpv /dev/sdN
  sg_vpd -p bl /dev/sdN

for one of the LUNs and I'll take a look. Thanks!

--
Martin K. Petersen	Oracle Linux Engineering
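
P.S. For anyone following along, the head/tail behavior described above
can be sketched roughly as follows. This is an illustrative model only,
not kernel or device code; the function name and the representation of
ranges as (start, end) tuples of logical blocks are my own:

```python
def split_discard(start, nr_blocks, granularity, alignment=0):
    """Split a discard request (in logical blocks) into the partial
    head/tail regions a device is expected to explicitly zero and the
    middle region it can deallocate in full unmap-granularity chunks.

    Returns (head, middle, tail); each is an (start, end) tuple or
    None if that region is empty.
    """
    end = start + nr_blocks
    # First granularity-aligned boundary at or after the request start.
    first = ((start - alignment + granularity - 1) // granularity) \
        * granularity + alignment
    # Last granularity-aligned boundary at or before the request end.
    last = ((end - alignment) // granularity) * granularity + alignment
    if first >= last:
        # Request spans no full chunk: the whole range must be zeroed.
        return (start, end), None, None
    head = (start, first) if first > start else None
    tail = (last, end) if end > last else None
    return head, (first, last), tail
```

E.g. with an unmap granularity of 8 blocks, a discard of blocks 3-22
would be expected to zero 3-7 and 16-22 and deallocate 8-15, so a
misaligned request should still reclaim the full chunks in the middle.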