Re: Orangefs ABI documentation

I compiled and tested the new patches,
and they seem to work more than great.
Unless it is just my imagination, the
kernel module is much faster now. I'll
measure it with something more rigorous
than seat-of-the-pants impressions to
be sure. The patches are pushed to
the for-next branch.

My "gnarly test" can get the code to flow
into wait_for_cancellation_downcall, but
never would flow past the
"if (signal_pending(current)) {" block,
though that doesn't prove anything...
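
For anyone following along, the path I'm poking at has roughly this
shape (a simplified sketch, not the actual orangefs source; op->done
and the 5 second timeout are placeholder names for illustration):

#include <linux/sched.h>	/* signal_pending(), current */
#include <linux/completion.h>	/* completions */
#include <linux/jiffies.h>	/* msecs_to_jiffies() */
#include <linux/errno.h>

/*
 * Simplified sketch of the cancellation wait path under discussion;
 * field names are invented, not taken from the orangefs tree.
 */
static int wait_for_cancellation_downcall(struct orangefs_kernel_op_s *op)
{
	unsigned long n;

	if (signal_pending(current)) {
		/*
		 * The expected case for a cancellation: the operation
		 * being cancelled was interrupted by a signal, and that
		 * signal is still pending here.  Give the client-core a
		 * bounded window to service the cancel, then give up.
		 */
		n = wait_for_completion_timeout(&op->done,
						msecs_to_jiffies(5000));
		return n ? 0 : -ETIMEDOUT;
	}

	/*
	 * The in-tree comments imply we should never get here; the
	 * gnarly test is meant to show whether that claim holds.
	 */
	wait_for_completion(&op->done);
	return 0;
}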

I had to look at the wiki page for "cargo culting" <g>...
When Becky was working on the cancellation
problem I alluded to earlier, we talked it
over and suspected that the spin_lock_irqsave
calls in service_operation were not
appropriate...
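
For context, the concern boils down to the difference between these
two forms; the irqsave variant only buys anything if the same lock can
also be taken from interrupt context, which is what we doubted for
service_operation (a sketch with a made-up lock name, not the orangefs
code itself):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(op_lock);	/* stand-in for the real lock */

/*
 * irqsave variant: required only if the lock can be taken from
 * interrupt (or softirq) context; otherwise it just disables local
 * interrupts for no benefit.
 */
static void touch_ops_irqsave(void)
{
	unsigned long flags;

	spin_lock_irqsave(&op_lock, flags);
	/* manipulate the op list */
	spin_unlock_irqrestore(&op_lock, flags);
}

/*
 * Plain variant: sufficient when every user of the lock runs in
 * process context, which is what we suspected was the case for
 * service_operation.
 */
static void touch_ops(void)
{
	spin_lock(&op_lock);
	/* manipulate the op list */
	spin_unlock(&op_lock);
}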

Thanks again Al...

-Mike


On Sat, Jan 23, 2016 at 2:24 PM, Mike Marshall <hubcap@xxxxxxxxxxxx> wrote:
> OK, I'll get them momentarily...
>
> I merged your other patches, and there was a merge
> conflict I had to work around... you're working from
> an orangefs tree that lacks one commit I had made
> last week... my linux-next tree has all your patches
> through yesterday in it now...
>
> I am setting up "the gnarly test" (at home from a VM,
> though) that should cause a bunch of cancellations...
> I want to see if I can get
> wait_for_cancellation_downcall to ever
> flow past that "if (signal_pending(current)) {"
> block... if it does, that demonstrates where
> the comments conflict with the code, right?
>
> -Mike
>
> On Sat, Jan 23, 2016 at 2:10 PM, Al Viro <viro@xxxxxxxxxxxxxxxxxx> wrote:
>> On Fri, Jan 22, 2016 at 09:54:48PM -0500, Mike Marshall wrote:
>>> Well... that all seems awesome; it compiled the first
>>> time, and all my quick tests on my dinky vm make
>>> it seem fine... It is Becky who recently spent a
>>> bunch of time fighting the cancellation dragons,
>>> so I'll see if I can't get her to weigh in on
>>> wait_for_cancellation_downcall tomorrow.
>>>
>>> We have some gnarly tests we were running on
>>> real hardware that helped reproduce the problems
>>> she was seeing in production with Clemson's
>>> Palmetto Cluster. I'll run them, but maybe not
>>> until Monday with the ice storm...
>>
>> OK, several more pushed.  The most interesting part is probably the
>> switch to real completions - you'd been open-coding them for no good
>> reason (and, as always with reinventing locking primitives, asking
>> for trouble).
>>
>> New bits just as untested as the earlier ones, of course...
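
The open-coded-completion point above, as I understand it, comes down
to the difference between these two patterns (a sketch with invented
names, not the actual before/after orangefs code):

#include <linux/completion.h>
#include <linux/wait.h>

/*
 * Open-coded version: a flag plus a waitqueue, with the wakeup and
 * memory-ordering details left for the filesystem to get right.
 */
struct op_open_coded {
	int			op_done;
	wait_queue_head_t	waitq;	/* init_waitqueue_head() at setup */
};

static void op_wait_open_coded(struct op_open_coded *op)
{
	wait_event(op->waitq, op->op_done);
}

static void op_complete_open_coded(struct op_open_coded *op)
{
	op->op_done = 1;
	wake_up(&op->waitq);
}

/*
 * Real completion: the same idea, with the locking and ordering
 * handled by the primitive itself.
 */
struct op_with_completion {
	struct completion	done;	/* init_completion() at setup */
};

static void op_wait(struct op_with_completion *op)
{
	wait_for_completion(&op->done);
}

static void op_complete(struct op_with_completion *op)
{
	complete(&op->done);
}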