Re: Orangefs ABI documentation

> what should the daemon see in such a situation?

I'll work to see if I can get an opinion on that from
some of the others...

As to the traces of AIO... I'm not sure how that ever worked.
The out-of-tree kernel module never had the
address_space_operations direct_IO call-out, and I think
AIO, the way it works in modern kernels, requires that?
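
For context, the call-out in question is the ->direct_IO member of
struct address_space_operations. As a minimal sketch of the wiring
(using the v4.4-era signature; orangefs_direct_IO here is hypothetical,
not something the module ever had):

	/* hypothetical sketch only; the out-of-tree module never had this */
	static ssize_t orangefs_direct_IO(struct kiocb *iocb,
					  struct iov_iter *iter, loff_t offset)
	{
		/* an O_DIRECT read/write would be handed off to the
		 * userspace daemon here, bypassing the page cache */
		return -EINVAL;	/* placeholder */
	}

	static const struct address_space_operations orangefs_aops = {
		.readpage  = orangefs_readpage,
		.direct_IO = orangefs_direct_IO,	/* the missing call-out */
	};

Without that hook an O_DIRECT open fails with EINVAL, so io_submit()
never gets a truly asynchronous path into the filesystem.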

I fooled with it a couple of years ago when I had
Christoph to mentor me, but implementing it and
getting it to work right was hard and didn't
seem as important as other work. How the
code that was in there was supposed to work seems
to be lost to history. Perhaps it was designed to
work with the libaio userspace library, but not with
the io_setup, io_destroy, io_submit, etc. interface
that is in modern kernels?

-Mike

On Tue, Feb 9, 2016 at 4:06 PM, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
> On Tue, Feb 09, 2016 at 05:40:49PM +0000, Al Viro wrote:
>
>> Could you try, on top of those fixes, commenting out the entire
>>         if (op->downcall.type == ORANGEFS_VFS_OP_FILE_IO) {
>>                 long n = wait_for_completion_interruptible_timeout(&op->done,
>>                                                         op_timeout_secs * HZ);
>>                 if (unlikely(n < 0)) {
>>                         gossip_debug(GOSSIP_DEV_DEBUG,
>>                                 "%s: signal on I/O wait, aborting\n",
>>                                 __func__);
>>                 } else if (unlikely(n == 0)) {
>>                         gossip_debug(GOSSIP_DEV_DEBUG,
>>                                 "%s: timed out.\n",
>>                                 __func__);
>>                 }
>>         }
>> block in orangefs_devreq_write_iter() and see if the corruption still happens?
>
> Another thing: what are the protocol rules regarding cancels?  The current
> code looks very odd - if we get hit by a signal after the daemon has
> picked up e.g. a read request but before it has replied, we will call
> orangefs_cancel_op_in_progress(), which will call service_operation() with
> ORANGEFS_OP_CANCELLATION.  That will insert the cancel request into the
> list and practically immediately notice that we have a pending signal,
> remove the cancel request from the list and bugger off - with the daemon
> almost certainly *not* getting to see it at all.
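>
> Roughly, condensed (helper names as in the orangefs tree, control flow
> heavily simplified):
>
>	orangefs_cancel_op_in_progress(tag)
>	  service_operation(cancel_op, ORANGEFS_OP_CANCELLATION)
>	    /* the cancel goes on the list the daemon reads from... */
>	    list_add_tail(&cancel_op->list, &orangefs_request_list);
>	    /* ...but we only got here because of a pending signal, so
>	     * the wait bails out with -EINTR almost immediately... */
>	    wait_for_matching_downcall(cancel_op);
>	    /* ...and the cancel is pulled back off the list, most
>	     * likely before the daemon ever read()s it */
>	    orangefs_clean_up_interrupted_operation(cancel_op);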
>
> I've asked about that before; if anybody has explained it, I've missed the
> reply.  How the fuck is that supposed to work?  Forget the kernel-side
> implementation details - what should the daemon see in such a situation?
>
> I would expect something like "you can't reuse a slot until the operation
> has either completed or been purged, or a cancel has been sent and ACKed by
> the daemon".  Is that what is intended?  If so, the handling of cancels might
> be better off asynchronous - let the slot freeing be done after the cancel
> has been ACKed, and _not_ in the context of the original syscall...
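>
> Something like this, completely untested, and anything marked "invented"
> is not in the tree: the signal path queues the cancel and returns at
> once, and the slot is released only when the daemon ACKs the cancel
> through the device (or on purge):
>
>	/* signal path: fire and forget, no waiting here; locking elided */
>	static void orangefs_queue_cancel(struct orangefs_kernel_op_s *op)
>	{
>		struct orangefs_kernel_op_s *cancel =
>			op_alloc(ORANGEFS_VFS_OP_CANCEL);
>
>		cancel->upcall.req.cancel.op_tag = op->tag;
>		cancel->slot_to_release = op->slot;	/* invented fields */
>		list_add_tail(&cancel->list, &orangefs_request_list);
>	}
>
>	/* in orangefs_devreq_write_iter(), when the matching ACK arrives: */
>	if (op->upcall.type == ORANGEFS_VFS_OP_CANCEL)
>		orangefs_bufmap_put(op->slot_to_release);
>
> That keeps "slot busy until completed, purged, or cancel ACKed" without
> blocking the interrupted process.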
>
> There are some traces of AIO support in that thing; could this be a victim of
> trimming async parts for submission into the mainline?