Suppose a device has a logical block size of 512 bytes and a physical block size of 4096 bytes (an "AF 512e" drive).

If it reports an unmap granularity of 1 (in logical blocks), its discard granularity should be 1 * 512 = 512 bytes. With max() it is set to 4096 bytes instead, while with min_not_zero() it is CORRECTLY set to 512 bytes.

If it does not report an unmap granularity (i.e. the field is 0), then with max() its discard granularity is set to 4096 bytes; with min_not_zero() it will ALSO be set to 4096 bytes.

Therefore, I don't see why max() should be used instead, as you suggested in your previous mails. (A standalone comparison of the two cases follows below your quoted mail.)

On 12 March 2016 at 05:41, Martin K. Petersen <martin.petersen@xxxxxxxxxx> wrote:
>>>>>> "Tom" == Tom Yan <tom.ty89@xxxxxxxxx> writes:
>
> Tom,
>
> Tom> Would min_not_zero() be more proper than max()?
>
> That would effectively set discard_granularity to physical_block_size
> regardless of whether unmap_granularity was provided or not.
>
> --
> Martin K. Petersen	Oracle Linux Engineering
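
For reference, here is a minimal userspace sketch of the comparison. It is my own illustration, not the actual drivers/scsi/sd.c code: it assumes the two expressions under discussion take (physical_block_size, unmap_granularity * logical_block_size) as their operands, and min_not_zero() is reimplemented here with the same "smaller of the two, ignoring zeroes" semantics as the kernel macro so that it builds standalone:

/*
 * Standalone sketch, NOT the actual sd.c code: evaluate the two
 * candidate expressions for discard_granularity on an AF 512e device,
 * once with an unmap granularity of 1 logical block and once with no
 * unmap granularity reported (0).
 */
#include <stdio.h>

#define max(a, b) ((a) > (b) ? (a) : (b))
/* userspace stand-in for the kernel's min_not_zero() */
#define min_not_zero(a, b) \
	((a) == 0 ? (b) : ((b) == 0 ? (a) : ((a) < (b) ? (a) : (b))))

int main(void)
{
	unsigned int logical_block_size = 512;		/* 512e logical block */
	unsigned int physical_block_size = 4096;	/* 512e physical block */
	unsigned int unmap_granularity[] = { 1, 0 };	/* reported / not reported */
	int i;

	for (i = 0; i < 2; i++) {
		/* unmap granularity converted from logical blocks to bytes */
		unsigned int bytes = unmap_granularity[i] * logical_block_size;

		printf("unmap_granularity=%u: max() -> %u, min_not_zero() -> %u\n",
		       unmap_granularity[i],
		       max(physical_block_size, bytes),
		       min_not_zero(bytes, physical_block_size));
	}

	return 0;
}

Compiled and run, it prints:

unmap_granularity=1: max() -> 4096, min_not_zero() -> 512
unmap_granularity=0: max() -> 4096, min_not_zero() -> 4096

which is exactly the behaviour described above.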