Re: [PATCH v3 06/10] scsi: sd_zbc: emulate ZONE_APPEND commands

On 2020/03/28 18:07, hch@xxxxxxxxxxxxx wrote:
> On Sat, Mar 28, 2020 at 09:02:43AM +0000, Damien Le Moal wrote:
>> On 2020/03/28 17:51, Christoph Hellwig wrote:
>>>> Since zone reset and finish operations can be issued concurrently with
>>>> writes and zone append requests, ensure a coherent update of the zone
>>>> write pointer offsets by also write locking the target zones for these
>>>> zone management requests.
>>>
>>> While they can be issued concurrently you can't expect sane behavior
>>> in that case.  So I'm not sure why we need the zone write lock in this
>>> case.
>>
>> The behavior will certainly not be sane for the buggy application doing writes
>> and resets to the same zone concurrently (I have debugged that several times in
>> the field). So I am not worried about that at all. The zone write lock here is
>> still used to make sure the wp cache stays in sync with the drive. Without it,
>> we could have races on completion update of the wp and get out of sync.
> 
> How do the applications expect to get sane results from that in general?

They do not get sane results :) Those are application bugs. I do not care about
those. What I do care about is that the wp cache stays in sync with the drive, so
that it does not itself become a cause of errors.

Rethinking this though, the error processing code doing a zone report and wp
cache refresh will trigger for any write error, even those resulting from dumb
application bugs. So, protection or not, the wp cache refresh will be done anyway,
and we could simply not do zone write locking for reset and finish since these are
really expected to be issued without any writes in flight.

> But if you think protecting against that is worth the effort I think
> there should be a separate patch to take the zone write lock for
> reset/finish.

OK. That would be easy to add. But given the point above, I am now trying to
convince myself that it is not necessary.
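
If we do end up wanting that protection, the core of such a patch would be small.
Something along these lines (rough and completely untested sketch; it reuses the
existing per-zone write locking via blk_req_zone_write_lock()/unlock(), and
assumes a trylock variant is available or added; the function name here is made
up):

static blk_status_t sd_zbc_lock_zone_for_mgmt(struct request *rq)
{
	/*
	 * Sketch only: serialize zone reset/finish against writes to the
	 * same zone using the same per-zone write lock that write requests
	 * take, so that wp cache updates at completion cannot race. The
	 * completion path would then call blk_req_zone_write_unlock(),
	 * exactly as it does for writes.
	 */
	if (req_op(rq) != REQ_OP_ZONE_RESET &&
	    req_op(rq) != REQ_OP_ZONE_FINISH)
		return BLK_STS_OK;

	/* Assumes a trylock helper exists or is added by the series */
	if (!blk_req_zone_write_trylock(rq))
		return BLK_STS_RESOURCE;

	return BLK_STS_OK;
}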

> 
>>>> +#define SD_ZBC_INVALID_WP_OFST	~(0u)
>>>> +#define SD_ZBC_UPDATING_WP_OFST	(SD_ZBC_INVALID_WP_OFST - 1)
>>>
>>> Given that this goes into the seq_zones_wp_ofst shouldn't the block
>>> layer define these values?
>>
>> We could, at least the first one. The second one is really something that could
>> be considered completely driver dependent since other drivers doing this
>> emulation may handle the updating state differently.
>>
>> Since this is the only driver where this is needed, maybe we can keep this here
>> for now?
> 
> Well, I'd rather keep magic values for a field defined in common code
> in the common code.  Having behavior details spread over different
> modules makes code rather hard to follow.
> 
>>>> +struct sd_zbc_zone_work {
>>>> +	struct work_struct work;
>>>> +	struct scsi_disk *sdkp;
>>>> +	unsigned int zno;
>>>> +	char buf[SD_BUF_SIZE];
>>>> +};
>>>
>>> Wouldn't it make sense to have one work_struct per scsi device and batch
>>> updates?  That is, also query a decent-sized buffer with a bunch of
>>> zones and update them all at once?  Also given that the other write
>>> pointer caching code is in the block layer, why is this in SCSI?
>>
>> Again, because we thought this is driver dependent in the sense that other
>> drivers may want to handle invalid WP entries differently.
> 
> What sensible other strategy exists?  Nevermind that I hope we never
> see another driver.  And as above - I really want to keep behavior
> together instead of weirdly split over different code bases.  My
> preference would still be to have it just in sd, but you gave some good
> arguments for keeping it in the block layer.  Maybe we need to take a
> deeper look and figure out a way to keep it isolated in SCSI.

OK. We can try again to see if we can keep all this WP caching in sd. The only
pain point is the revalidation, as I explained before. Everything else would stay
pretty much the same and be entirely SCSI specific. I will dig into it again to
see what can be done.
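
To give an idea of what keeping the cache in sd could look like on the completion
side, roughly (sketch only; the zones_wp_ofst array field and this function are
invented for illustration, they are not what the series currently does):

static void sd_zbc_update_wp_ofst(struct scsi_disk *sdkp,
				  struct request *rq,
				  unsigned int good_bytes)
{
	unsigned int zno = blk_rq_zone_no(rq);

	/*
	 * Sketch: keep the cached write pointer offsets (in 512B sectors)
	 * in an array attached to the scsi_disk instead of the request
	 * queue, and update the entry of the target zone on successful
	 * command completion.
	 */
	switch (req_op(rq)) {
	case REQ_OP_ZONE_RESET:
		sdkp->zones_wp_ofst[zno] = 0;
		break;
	case REQ_OP_ZONE_FINISH:
		sdkp->zones_wp_ofst[zno] = blk_queue_zone_sectors(rq->q);
		break;
	case REQ_OP_WRITE:
	case REQ_OP_WRITE_ZEROES:
	case REQ_OP_ZONE_APPEND:
		sdkp->zones_wp_ofst[zno] += good_bytes >> SECTOR_SHIFT;
		break;
	default:
		break;
	}
}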

> 
>> Also, I think that
>> one work struct per device may be overkill. This is for error recovery and on
>> a normal healthy system, write errors are rare.
> 
> I think it is less overkill than the dynamic allocation scheme with
> the mempool and slab cache, that is why I suggested it.

Ah. OK. Good point.
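
Agreed, a single per-device work plus a bitmap of zones to refresh would also
avoid the mempool/kmem_cache setup entirely. Roughly like this (sketch only; the
zone_wp_update_work and zones_wp_update fields and sd_zbc_refresh_zone_wp_ofst()
are made-up names for illustration):

static void sd_zbc_wp_update_workfn(struct work_struct *work)
{
	struct scsi_disk *sdkp =
		container_of(work, struct scsi_disk, zone_wp_update_work);
	unsigned int zno;

	/*
	 * Sketch: walk the bitmap of zones flagged by the write error path
	 * and refresh their cached wp offset from a report zones reply,
	 * batching several zones per REPORT ZONES command if desired.
	 * Races with bits being set while the work runs are ignored here.
	 */
	for_each_set_bit(zno, sdkp->zones_wp_update, sdkp->nr_zones) {
		sd_zbc_refresh_zone_wp_ofst(sdkp, zno);
		clear_bit(zno, sdkp->zones_wp_update);
	}
}

/* Write error path: flag the zone and kick the per-device work. */
static void sd_zbc_mark_zone_wp_update(struct scsi_disk *sdkp,
				       unsigned int zno)
{
	set_bit(zno, sdkp->zones_wp_update);
	schedule_work(&sdkp->zone_wp_update_work);
}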

> 


-- 
Damien Le Moal
Western Digital Research



