On Mon, Feb 19, 2024 at 01:01:07PM +0000, John Garry wrote:
> Add initial support for atomic writes.
> 
> As is the standard method, feed device properties via module params,
> those being:
> - atomic_max_size_blks
> - atomic_alignment_blks
> - atomic_granularity_blks
> - atomic_max_size_with_boundary_blks
> - atomic_max_boundary_blks
> 
> These just match sbc4r22 section 6.6.4 - Block limits VPD page.
> 
> We just support ATOMIC WRITE (16).
> 
> The major change in the driver is how we lock the device for RW
> accesses.
> 
> Currently the driver uses a per-device lock for accessing device
> metadata and "media" data (calls to do_device_access()) atomically for
> the duration of the whole read/write command.
> 
> This does not suit verifying atomic writes. The reason is that
> currently all reads/writes are atomic, so using atomic writes does not
> prove anything.
> 
> Change the device access model so that regular writes are only atomic
> on a per-sector basis, while reads and atomic writes are fully atomic.
> 
> As mentioned, since accessing metadata and device media is atomic,
> continue to have regular writes involving metadata - like discard or
> PI - be atomic. We can improve this later.
> 
> Currently we only support the model where overlapping ongoing reads or
> writes wait for the current access to complete before an atomic write
> commences. This is described in section 4.29.3.2 of the SBC. However,
> we simplify things and wait for all accesses to complete (when issuing
> an atomic write).
> 
> Signed-off-by: John Garry <john.g.garry@xxxxxxxxxx>
> ---

<snip>

> +#define DEF_ATOMIC_WR 0

<snip>

> +static unsigned int sdebug_atomic_wr = DEF_ATOMIC_WR;

<snip>

> +MODULE_PARM_DESC(atomic_write, "enable ATOMIC WRITE support, support WRITE ATOMIC(16) (def=1)");

Hi John,

The default value here seems to be 0 and not 1. Got me a bit confused
while testing :)

Regards,
ojaswin

> MODULE_PARM_DESC(lowest_aligned, "lowest aligned lba (def=0)");
> MODULE_PARM_DESC(lun_format, "LUN format: 0->peripheral (def); 1 --> flat address method");
> MODULE_PARM_DESC(max_luns, "number of LUNs per target to simulate(def=1)");
> @@ -6260,6 +6575,11 @@ MODULE_PARM_DESC(unmap_alignment, "lowest aligned thin provisioning lba (def=0)"
> MODULE_PARM_DESC(unmap_granularity, "thin provisioning granularity in blocks (def=1)");
> MODULE_PARM_DESC(unmap_max_blocks, "max # of blocks can be unmapped in one cmd (def=0xffffffff)");
> MODULE_PARM_DESC(unmap_max_desc, "max # of ranges that can be unmapped in one cmd (def=256)");
> +MODULE_PARM_DESC(atomic_wr_max_length, "max # of blocks can be atomically written in one cmd (def=8192)");
> +MODULE_PARM_DESC(atomic_wr_align, "minimum alignment of atomic write in blocks (def=2)");
> +MODULE_PARM_DESC(atomic_wr_gran, "minimum granularity of atomic write in blocks (def=2)");
> +MODULE_PARM_DESC(atomic_wr_max_length_bndry, "max # of blocks can be atomically written in one cmd with boundary set (def=8192)");
> +MODULE_PARM_DESC(atomic_wr_max_bndry, "max # boundaries per atomic write (def=128)");
> MODULE_PARM_DESC(uuid_ctl,
> 	"1->use uuid for lu name, 0->don't, 2->all use same (def=0)");
> MODULE_PARM_DESC(virtual_gb, "virtual gigabyte (GiB) size (def=0 -> use dev_size_mb)");
> @@ -7406,6 +7726,7 @@ static int __init scsi_debug_init(void)
> 			return -EINVAL;
> 		}
> 	}
> +
> 	xa_init_flags(per_store_ap, XA_FLAGS_ALLOC | XA_FLAGS_LOCK_IRQ);
> 	if (want_store) {
> 		idx = sdebug_add_store();
> @@ -7613,7 +7934,9 @@ static int sdebug_add_store(void)
> 		map_region(sip, 0, 2);
> 	}
> 
> -	rwlock_init(&sip->macc_lck);
> +	rwlock_init(&sip->macc_data_lck);
> +	rwlock_init(&sip->macc_meta_lck);
> +	rwlock_init(&sip->macc_sector_lck);
> 	return (int)n_idx;
> err:
> 	sdebug_erase_store((int)n_idx, sip);
> --
> 2.31.1
> 
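
P.S. For my own understanding while testing, I sketched the access model
described in the commit message roughly as below. This is only an
illustrative sketch, not the patch code: struct example_store,
plain_write(), atomic_write() and the sector_sz parameter are invented
for the example; only the macc_data_lck/macc_sector_lck field names
mirror the patch.

/*
 * Illustrative sketch of the described access model (not the patch code).
 */
#include <linux/spinlock.h>
#include <linux/string.h>
#include <linux/types.h>

struct example_store {
	rwlock_t macc_data_lck;		/* whole-range "media" access */
	rwlock_t macc_sector_lck;	/* serialises single sector copies */
	u8 *storep;
};

/*
 * Regular WRITE: shared hold on the data lock, so readers may interleave
 * between sectors; only each individual sector copy is serialised, i.e.
 * the write is atomic per sector only.
 */
static void plain_write(struct example_store *sip, u64 lba, u32 num,
			const u8 *buf, u32 sector_sz)
{
	u32 i;

	read_lock(&sip->macc_data_lck);
	for (i = 0; i < num; i++) {
		write_lock(&sip->macc_sector_lck);
		memcpy(sip->storep + (lba + i) * sector_sz,
		       buf + (u64)i * sector_sz, sector_sz);
		write_unlock(&sip->macc_sector_lck);
	}
	read_unlock(&sip->macc_data_lck);
}

/*
 * WRITE ATOMIC(16): exclusive hold on the data lock, so it waits for all
 * in-flight accesses (which hold the lock shared) and nobody can observe
 * a partially completed write.
 */
static void atomic_write(struct example_store *sip, u64 lba, u32 num,
			 const u8 *buf, u32 sector_sz)
{
	write_lock(&sip->macc_data_lck);
	memcpy(sip->storep + lba * sector_sz, buf, (size_t)num * sector_sz);
	write_unlock(&sip->macc_data_lck);
}

If that mental model is right, the "wait for all accesses to complete"
simplification falls out of the exclusive data lock; please correct me
if I am misreading the patch.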