Re: [RFC PATCH 0/6] Understanding delays due to throttling under very heavy write load

(resending to list)

On Fri, Feb 3, 2012 at 3:33 PM, Jim Schutt <jaschut@xxxxxxxxxx> wrote:
>
> On 02/03/2012 10:06 AM, Gregory Farnum wrote:
>>
>> On Feb 3, 2012, at 8:18 AM, Jim Schutt <jaschut@xxxxxxxxxx> wrote:
>>
>>> On 02/02/2012 05:28 PM, Gregory Farnum wrote:
>>>>
>>>> On Thu, Feb 2, 2012 at 12:22 PM, Jim Schutt <jaschut@xxxxxxxxxx> wrote:
>>>>>
>>>>> I found 0 instances of "waiting for commit" in all my OSD logs for my last
>>>>> run.
>>>>>
>>>>> So I never waited on the journal?
>>>>
>>>>
>>>> Looks like it. Interesting.
>>>>
>>>>
>>>>>>> So far I'm looking at two behaviours I've noticed that seem anomalous to
>>>>>>> me.
>>>>>>>
>>>>>>> One is that I instrumented ms_dispatch(), and I see it take
>>>>>>> a half-second or more several hundred times, out of several
>>>>>>> thousand messages.  Is that expected?
>>>>>>
>>>>>>
>>>>>>
>>>>>> How did you instrument it? If you wrapped the whole function it's
>>>>>> possible that those longer runs are actually chewing through several
>>>>>> messages that had to get waitlisted for some reason previously.
>>>>>> (That's the call to do_waiters().)
>>>>>
>>>>>
>>>>>
>>>>> Yep, I wrapped the whole function, and also instrumented taking osd_lock
>>>>> while I was there.  About half the time that ms_dispatch() takes more than
>>>>> 0.5 seconds, taking osd_lock is responsible for the delay.  There are two
>>>>> dispatch threads, one for ops and one for rep_ops, right?  So one's
>>>>> waiting on the other?
>>>>
>>>>
>>>> There's just one main dispatcher; there's no split for ops and rep_ops.
>>>> The reason for that "dispatch_running" is that if there are requests
>>>> waiting then the tick() function will run through them if the
>>>> messenger dispatch thread is currently idle.
>>>> But it is possible for the Messenger to try and dispatch, and for that
>>>> to be blocked while some amount of (usually trivial) work is being
>>>> done by a different thread, yes. I don't think we've ever observed it
>>>> being a problem for anything other than updating OSD maps, though...
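
(For anyone else who wants to instrument this: below is a minimal sketch of
the kind of wrapper Jim describes, timing the osd_lock acquisition separately
from the dispatch body. It's illustrative only; it uses std::chrono for
brevity, and the names are stand-ins rather than the actual OSD code.)

    #include <chrono>
    #include <iostream>
    #include <mutex>

    std::mutex osd_lock;   // stand-in for the real OSD::osd_lock

    void timed_ms_dispatch(/* Message *m */) {
      using clock = std::chrono::steady_clock;
      auto t0 = clock::now();

      std::unique_lock<std::mutex> l(osd_lock);   // contention shows up here
      auto t1 = clock::now();

      // ... the real dispatch work would happen here ...

      auto t2 = clock::now();
      auto ms = [](clock::time_point a, clock::time_point b) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(b - a)
            .count();
      };
      if (ms(t0, t2) > 500)
        std::cerr << "slow dispatch: " << ms(t0, t1) << " ms waiting for "
                  << "osd_lock, " << ms(t1, t2) << " ms dispatching\n";
    }

Logging the lock wait and the dispatch body separately is what lets you tell
lock contention apart from slow message handling.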
>>>
>>>
>>> Ah, OK.
>>>
>>> I guess I was confused by my log output, e.g.:
>>
>>
>> D'oh. Sorry, you confused me with your reference to repops, which
>> aren't special-cased or anything. But there are two messengers on the
>> OSD, each with their own dispatch thread. One of those messengers is
>> for clients and one is for other OSDs.
>>
>> And now that you point that out, I wonder if the problem is lack of
>> Cond signaling in ms_dispatch. I'm on my phone right now but I believe
>> there's a chunk of commented-out code (why commented instead of
>> deleted? I don't know) that we want to uncomment for reasons that will
>> become clear when you look at it. :)
>> Try that and see how many of your problems disappear?
>>
>
> So I cherry-picked Sage's commit 7641a0e171f onto the code
> I've been running (1fe75ee6419 + some debug stuff), and saw
> no obvious difference in behaviour.
>
> I also tested Sage's suggestion of separating journals and
> data, by putting two journal partitions on half my disks,
> and two data partitions on the other half.  I made the data
> partitions relatively small (~200 GiB each on a 1 TiB drive)
> to minimize the effect of inner vs. outer tracks.
>
> That didn't seem to help either.
>
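
(For reference, the split layout you describe would be expressed with
something like this in ceph.conf; the paths and device names here are only
illustrative:)

    [osd.0]
        ; filesystem for object data on one spindle ...
        osd data = /data/osd.0
        ; ... journal on a raw partition of a different spindle
        osd journal = /dev/sdg1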

You can try running 'iostat -t -kx -d 1' on the OSD nodes and watching
whether %util reaches 100%, and, when it does, whether the saturation is
driven by the number of I/O operations (seek thrashing) or by a high volume
of data being transferred.
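For example (the output line here is hypothetical and the columns are
trimmed; the interesting ones are r/s and w/s for operation counts versus
rkB/s and wkB/s for data volume, plus await and avgqu-sz for queueing):

    $ iostat -t -kx -d 1
    Device:    r/s     w/s   rkB/s    wkB/s  avgqu-sz   await  %util
    sdb       5.00  450.00   80.00 48000.00     95.30  210.40 100.00

If %util is pinned at 100% with low kB/s but high operation counts and
await, the disks are seek-bound; if the kB/s is near the drives' streaming
rate, they're simply saturated on bandwidth.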
FWIW, you may also try setting 'filestore flusher = false' and lowering
'/proc/sys/vm/dirty_background_ratio' to a small value (e.g., 1; note it
takes a percentage of memory, so for byte granularity use
'/proc/sys/vm/dirty_background_bytes' instead).
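That is, something like this (values illustrative; setting
dirty_background_bytes to a nonzero value overrides dirty_background_ratio):

    # ceph.conf, in the [osd] section:
    filestore flusher = false

    # on each OSD node; the ratio is a percent of total memory:
    echo 1 > /proc/sys/vm/dirty_background_ratio

    # or, for byte granularity (kernels >= 2.6.29):
    echo $((4 * 1024 * 1024)) > /proc/sys/vm/dirty_background_bytes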

Yehuda

