On Tue, Jul 06 2010 at 11:35pm -0400,
Douglas Gilbert <dgilbert@xxxxxxxxxxxx> wrote:

> On 10-07-06 09:39 PM, Martin K. Petersen wrote:
> >>>>>> "Mike" == Mike Snitzer <snitzer@xxxxxxxxxx> writes:
> >
> >Mike> # cat /sys/block/sda/queue/discard_granularity
> >Mike> 512
> >Mike> # cat /sys/block/sda/queue/discard_max_bytes
> >Mike> 4294966784
> >
> >Mike> I'll look to understand why 'discard_max_bytes' is so large for
> >Mike> this LUN despite the standard Block Limits VPD page not reflecting
> >Mike> this.
> >
> >discard_max_bytes is 0xFFFFFFFF for WRITE SAME(16).
>
> FORMAT UNIT has several associated mechanisms (e.g. the
> IMMED bit and REQUEST SENSE polling) that let it
> run for a long time.  WRITE SAME has no such mechanisms.
> There was a proposal put to T10 to place an upper limit
> on WRITE SAME's LBA count, but I think that has been
> dropped.  IMO, if we want to give large block counts to
> UNMAP or WRITE SAME in the absence of guidance from the
> Block Limits VPD page, then we need to cope with the
> device saying "nope".
>
> Whatever device Mike has, it seems to be failing the
> WRITE SAME(16) command due to the huge LBA block count.
> Does the device work with a smaller LBA block count?
> For example:
>     sg_write_same --unmap --lba 0 --num 1024 /dev/sda

Yes, and even large requests that have 4K granularity work.  It turns
out that this LUN has a 4K granularity requirement (it will fail the
WRITE SAME if the granularity requirement is not met):

    4294966784 % 4096 = 3584

So we need to see why Linux actually has discard_max_bytes = 4294966784
rather than the full 0xFFFFFFFF we initialize in read_capacity_16:

    q->limits.max_discard_sectors = 0xffffffff;

My bet is on blkdev_issue_discard:

    unsigned int max_discard_sectors =
        min(q->limits.max_discard_sectors, UINT_MAX >> 9);

Mike
--
To unsubscribe from this list: send the line "unsubscribe linux-scsi" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html