On Tue, Jan 08 2019 at 10:30am -0500,
Bryan Gurney <bgurney@xxxxxxxxxx> wrote:

> On Mon, Jan 7, 2019 at 7:10 PM Benjamin Marzinski <bmarzins@xxxxxxxxxx> wrote:
> >
> > On Mon, Jan 07, 2019 at 02:31:23PM -0500, Bryan Gurney wrote:
> > > +
> > > +static int dust_map_read(struct dust_device *dd, sector_t thisblock,
> > > +                         bool fail_read_on_bb)
> > > +{
> > > +        unsigned long flags;
> > > +        struct badblock *bblk;
> > > +
> > > +        spin_lock_irqsave(&dd->dust_lock, flags);
> > > +        bblk = dust_rb_search(&dd->badblocklist, thisblock);
> >
> > I don't see the benefit of doing dust_rb_search() if fail_read_on_bb
> > isn't set.  It just slows the read without using the result.
>
> This is something that I have to admit was unintentional, but I like
> it: it keeps the timing consistent between the "enabled" and
> "disabled" modes (a.k.a.: the "fail_read_on_bad_block" and "bypass"
> modes).
>
> The primary goal of this target is to be a close simulation of a drive
> with bad sectors.  Therefore, it would be good to have a consistent
> "seek time" between the two modes.  There could be test cases where a
> problem only appears on a "faster" (or "slower") device, but that's
> not what this test device is intended to test.
>
> Considering the above, can we leave this as is?  (I can leave a
> comment explaining the desire for timing consistency between the
> "enabled" and "disabled" states.)

But this extra work isn't "seek time", it is CPU time.  So no, there is
no benefit to doing this.

> > > +        if (fail_read_on_bb && bblk) {
> > > +                spin_unlock_irqrestore(&dd->dust_lock, flags);
> > > +                return DM_MAPIO_KILL;
> > > +        }
> > > +
> > > +        spin_unlock_irqrestore(&dd->dust_lock, flags);
> > > +        return DM_MAPIO_REMAPPED;
> > > +}
> > > +
> > > +static int dust_map_write(struct dust_device *dd, sector_t thisblock,
> > > +                          bool fail_read_on_bb)
> > > +{
> > > +        unsigned long flags;
> > > +        struct badblock *bblk;
> > > +
> > > +        spin_lock_irqsave(&dd->dust_lock, flags);
> > > +        bblk = dust_rb_search(&dd->badblocklist, thisblock);
> > > +
> > > +        if (fail_read_on_bb && bblk) {
> >
> > The same goes for here as well.
>
> Please see my above comment for dust_map_read().

Nope, see above ;)
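A minimal sketch of the early-return shape being suggested above: take
dust_lock and do the rb-tree lookup only when fail_read_on_bb is set, so
"bypass" mode pays no extra CPU cost.  The names follow the quoted patch;
the exact structure here is illustrative only, not the code that was
ultimately merged.

static int dust_map_read(struct dust_device *dd, sector_t thisblock,
                         bool fail_read_on_bb)
{
        unsigned long flags;
        int ret = DM_MAPIO_REMAPPED;

        if (fail_read_on_bb) {
                /* Only pay for the lock and lookup in "enabled" mode */
                spin_lock_irqsave(&dd->dust_lock, flags);
                if (dust_rb_search(&dd->badblocklist, thisblock))
                        ret = DM_MAPIO_KILL;
                spin_unlock_irqrestore(&dd->dust_lock, flags);
        }

        return ret;
}

The same shape applies to dust_map_write().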
> > > +/*
> > > + * Target parameters:
> > > + *
> > > + * <device_path> <blksz>
> > > + *
> > > + * device_path: path to the block device
> > > + * blksz: block size (valid choices: 512, 4096)
> > > + */
> > > +static int dust_ctr(struct dm_target *ti, unsigned int argc, char **argv)
> > > +{
> > > +        struct dust_device *dd;
> > > +        const char *device_path = argv[0];
> > > +        unsigned int blksz;
> > > +        unsigned int sect_per_block;
> > > +
> > > +        if (argc != 2) {
> > > +                ti->error = "requires exactly 2 arguments";
> > > +                return -EINVAL;
> > > +        }
> > > +
> > > +        if (kstrtouint(argv[1], 10, &blksz) || !blksz) {
> > > +                ti->error =
> > > +                        "Invalid block size parameter -- please enter 512 or 4096";
> > > +                return -EINVAL;
> > > +        }
> >
> > Why check for !blksz above, when the code only allows 512 and 4096?  Why
> > not just check for those values in the first "if" statement?
>
> This is actually a two-stage check: the first stage checks for
> non-numeric input:
>
> # dmsetup create dust1 --table '0 10483712 dust /dev/vdb1 foo'
> stdout:
> "device-mapper: reload ioctl on dust1 failed: Invalid argument"
> printk:
> "kernel: device-mapper: table: 253:2: dust: Invalid block size
> parameter -- please enter 512 or 4096"
>
> ...and the second stage checks for the usable block sizes:
>
> # dmsetup create dust1 --table '0 10483712 dust /dev/vdb1 520'
> stdout:
> "device-mapper: reload ioctl on dust1 failed: Invalid argument"
> printk:
> "kernel: device-mapper: table: 253:2: dust: Invalid block size --
> please enter 512 or 4096"
>
> These different messages help clarify whether the parameter isn't a
> usable block size, or isn't an integer at all.  (Perhaps the messages
> are a bit too similar.)
>
> Given the above, should there still be a 2-stage check, or should this
> be folded into one that simply checks whether blksz is 512 or 4096?

I'm fine with it as is, but I have a different concern: why does this
target need to override the queue_limits at all?  What is the benefit
to rigidly imposing the provided 512 or 4096 bytes for all of
{logical_block,physical_block,minimum_io,optimal_io}_size?

If the underlying device is a 4K native device it simply cannot handle
512b IO.  At a minimum you need to verify that the underlying device
can even handle 512b IO.

In general, this ability to override the block core's normal limits
stacking (based on underlying device capabilities) needs proper
justification.

Mike

--
dm-devel mailing list
dm-devel@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/dm-devel
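A sketch of the minimum constructor-time check being asked for above:
reject a blksz the underlying device cannot actually service, instead of
unconditionally overriding the stacked queue_limits.  This is illustrative
only; the helper name dust_check_blksz() is hypothetical, and it assumes
dd->dev was already obtained via dm_get_device() earlier in dust_ctr(),
which the quoted excerpt does not show.

static int dust_check_blksz(struct dm_target *ti, struct dust_device *dd,
                            unsigned int blksz)
{
        /* Only 512 and 4096 are meaningful for this target. */
        if (blksz != 512 && blksz != 4096) {
                ti->error = "Block size must be 512 or 4096";
                return -EINVAL;
        }

        /*
         * Don't claim a block size smaller than the underlying device's
         * logical block size (e.g. 512 on 4K-native media).
         */
        if (blksz < bdev_logical_block_size(dd->dev->bdev)) {
                ti->error = "Block size is smaller than the device's logical block size";
                return -EINVAL;
        }

        return 0;
}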