Re: [PATCH v3 06/10] scsi: sd_zbc: emulate ZONE_APPEND commands

On Sat, Mar 28, 2020 at 09:02:43AM +0000, Damien Le Moal wrote:
> On 2020/03/28 17:51, Christoph Hellwig wrote:
> >> Since zone reset and finish operations can be issued concurrently with
> >> writes and zone append requests, ensure a coherent update of the zone
> >> write pointer offsets by also write locking the target zones for these
> >> zone management requests.
> > 
> > While they can be issued concurrently you can't expect sane behavior
> > in that case.  So I'm not sure why we need the zone write lock in this
> > case.
> 
> The behavior will certainly not be sane for a buggy application doing writes
> and resets to the same zone concurrently (I have debugged that several times in
> the field), so I am not worried about that at all. The zone write lock here is
> still used to make sure the wp cache stays in sync with the drive. Without it,
> we could race on completion updates of the wp and get out of sync.

How do applications expect to get sane results from that in general?

But if you think protecting against that is worth the effort, I think
it should be a separate patch that takes the zone write lock for
reset/finish.

> >> +#define SD_ZBC_INVALID_WP_OFST	~(0u)
> >> +#define SD_ZBC_UPDATING_WP_OFST	(SD_ZBC_INVALID_WP_OFST - 1)
> > 
> > Given that this goes into the seq_zones_wp_ofst shouldn't the block
> > layer define these values?
> 
> We could, at least the first one. The second one is really something that could
> be considered completely driver dependent since other drivers doing this
> emulation may handle the updating state differently.
> 
> Since this is the only driver where this is needed, maybe we can keep it here
> for now?

Well, I'd rather keep magic values for a field defined in common code
in the common code.  Having behavioral details spread across different
modules makes the code rather hard to follow.

> >> +struct sd_zbc_zone_work {
> >> +	struct work_struct work;
> >> +	struct scsi_disk *sdkp;
> >> +	unsigned int zno;
> >> +	char buf[SD_BUF_SIZE];
> >> +};
> > 
> > Wouldn't it make sense to have one work_struct per scsi device and batch
> > updates?  That is, query a decent-sized buffer with a bunch of
> > zones and update them all at once?  Also given that the other write
> > pointer caching code is in the block layer, why is this in SCSI?
> 
> Again, because we thought this is driver dependent in the sense that other
> drivers may want to handle invalid WP entries differently.

What sensible other strategy exists?  Never mind that I hope we never
see another driver.  And as above, I really want to keep behavior
together instead of weirdly split over different code bases.  My
preference would still be to have it just in sd, but you gave some good
arguments for keeping it in the block layer.  Maybe we need to take a
deeper look and figure out a way to keep it isolated in SCSI.

> Also, I think that
> one work struct per device may be overkill. This is for error recovery, and on
> normal, healthy systems write errors are rare.

I think it is less overkill than the dynamic allocation scheme with
the mempool and slab cache; that is why I suggested it.


