Re: Orangefs ABI documentation

 > Objections?

Heck no... I've been trying to avoid changing the protocol, so that
keeping the out-of-tree Frankenstein version of the kernel module going
doesn't turn into a whole other project, but getting this version of the
kernel module upstream, infused with ideas from depth-of-knowledge folks
like you, is the real goal here.

You're talking about changing orangefs_kernel_op_s (pvfs2_kernel_op_t
out of tree), and that doesn't cross the boundary into userspace... even
if it did, that "completion" structure looks like it has been around as
long as any of the Linux versions we try to run on...
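
For what it's worth, here's roughly the shape I think you're describing.
Just a sketch so I'm sure I follow; the state constants, the init helper,
and the setter bodies are guesses on my part, not lifted from the tree:

#include <linux/completion.h>
#include <linux/spinlock.h>

struct orangefs_kernel_op_s {
        /* ... tag, upcall, downcall, list linkage, etc. ... */
        spinlock_t lock;
        int op_state;
        struct completion done;         /* would replace op->waitq */
};

/* wherever the op gets allocated */
static inline void op_init_waiting(struct orangefs_kernel_op_s *op)
{
        spin_lock_init(&op->lock);
        op->op_state = 0;
        init_completion(&op->done);
}

/* assuming callers hold op->lock, as the current code seems to */
static inline void op_set_state_serviced(struct orangefs_kernel_op_s *op)
{
        op->op_state = OP_VFS_STATE_SERVICED;   /* name assumed */
        complete(&op->done);
}

static inline void op_set_state_purged(struct orangefs_kernel_op_s *op)
{
        op->op_state = OP_VFS_STATE_PURGED;     /* name assumed */
        complete(&op->done);
}

None of that is visible to userspace (the daemon only ever sees the
upcall/downcall buffers through the character device), so the protocol
stays put.  I've sketched the wait side below your quoted message.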


On Mon, Feb 8, 2016 at 10:32 PM, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
> On Mon, Feb 08, 2016 at 11:35:35PM +0000, Al Viro wrote:
>
>> And AFAICS
>>         if (is_daemon_in_service() < 0) {
>>                 /*
>>                  * By incrementing the per-operation attempt counter, we
>>                  * directly go into the timeout logic while waiting for
>>                  * the matching downcall to be read
>>                  */
>>                 gossip_debug(GOSSIP_WAIT_DEBUG,
>>                              "%s:client core is NOT in service(%d).\n",
>>                              __func__,
>>                              is_daemon_in_service());
>>                 op->attempts++;
>>         }
>> is inherently racy.  Suppose the daemon gets stopped during the window
>> between that check and the moment the request is added to the queue.  If it
>> had been stopped past that window, the op would've been purged; if it had
>> been stopped before the check, we get ->attempts bumped; but if it happens
>> _within_ that window, in wait_for_matching_downcall() you'll get to
>>                 /*
>>                  * if this was our first attempt and client-core
>>                  * has not purged our operation, we are happy to
>>                  * simply wait
>>                  */
>>                 if (op->attempts == 0 && !op_state_purged(op)) {
>>                         spin_unlock(&op->lock);
>>                         schedule();
>> with nothing to wake you up until the new daemon gets started.  Sure, it's
>> an interruptible sleep, so you can always kill -9 the sucker, but it still
>> looks wrong.
>
> BTW, what is wait_for_matching_downcall() trying to do if the purge happens
> just as it's being entered?  We put ourselves on op->waitq.  Then we check
> op_state_serviced() - OK, it hadn't been.  We check for signals; normally
> there won't be any.  Then we see that the state is purged and proceed to
> do schedule_timeout(op_timeout_secs * HZ), which will be a plain and simple
> "sleep for 10 seconds", since the wakeup on op->waitq happened on purge,
> before we'd done prepare_to_wait().  That'll be followed by returning
> -ETIMEDOUT and failing service_operation() with no attempt to retry.
> OTOH, if the purge happens a few cycles later, after we'd entered
> the loop and done prepare_to_wait(), we'll end up in the retry logic.
> Looks bogus...
>
> FWIW, what all these bugs show is that manual use of wait queues is very
> easy to get wrong.  If nothing else, I'd be tempted to replace op->waitq
> with another struct completion and kill those loops in wait_for_..._downcall()
> entirely - wait_for_completion_interruptible_timeout() *ONCE* and then
> look at the state.  With complete() done in op_set_state_{purged,serviced}...
> Objections?
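
Here's what I picture the *ONCE* version looking like, continuing the
sketch from above.  The return values (and whether a purge should turn
into a retry in service_operation()) are my assumptions, not a patch:

/* one sleep, then look at the state; no hand-rolled waitq loop */
static int wait_for_matching_downcall(struct orangefs_kernel_op_s *op)
{
        long n;

        /*
         * A single wait covers every case: complete() from
         * op_set_state_serviced()/op_set_state_purged() wakes us,
         * a signal interrupts us, or the timeout expires.  There's
         * no window between checking the state and going to sleep,
         * so the lost-wakeup scenario above can't happen.
         */
        n = wait_for_completion_interruptible_timeout(&op->done,
                                                      op_timeout_secs * HZ);
        if (n < 0)
                return -EINTR;          /* killed by a signal */

        spin_lock(&op->lock);
        if (op_state_serviced(op)) {
                spin_unlock(&op->lock);
                return 0;
        }
        if (op_state_purged(op)) {
                spin_unlock(&op->lock);
                /* a retry would need reinit_completion(&op->done) */
                return -EAGAIN;
        }
        spin_unlock(&op->lock);
        return -ETIMEDOUT;              /* n == 0: genuinely timed out */
}

If I'm reading it right, that would also make the is_daemon_in_service()
/ ->attempts dance above unnecessary.

-Mike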