Thanks a lot for responding, Martin. Forgive my ignorance; I'm just trying to gain understanding. For example, if we find a device with a 2 TiB max discard (way too high for any device to handle reasonably, from what I've seen) and we add a quirk for it that brings the max discard down, how do we decide what value to bring it down to? Would we ask the hardware vendor for an optimal value? Is there some other way we could decide the value? Thanks again for any help.

On Fri, Jun 9, 2023 at 2:48 PM Martin K. Petersen
<martin.petersen@xxxxxxxxxx> wrote:
>
>
> John,
>
> > Some drive manufacturers export a very large supported max discard
> > size. However, when the operating system sends I/O of the max size to
> > the device, extreme I/O latency can often be encountered. Since
> > hardware does not provide an optimal discard value in addition to the
> > max, and there is no way to foreshadow how well a drive handles the
> > large size, take the method from the max_sectors setting and use
> > BLK_DEF_MAX_SECTORS to set a more reasonable default discard max. This
> > should avoid the extreme latency while still allowing the user to
> > increase the value for specific needs.
>
> What's reasonable for one device may be completely unreasonable for
> another. 4 * BLK_DEF_MAX_SECTORS is *tiny* and will penalize performance
> on many devices.
>
> If there's a problem with a device returning something that doesn't make
> sense, let's quirk it.
>
> --
> Martin K. Petersen      Oracle Linux Engineering