Re: Fatigue for XFS

On Tue, May 6, 2014 at 12:36 AM, Dave Chinner <david@xxxxxxxxxxxxx> wrote:
> On Mon, May 05, 2014 at 11:49:05PM +0400, Andrey Korolyov wrote:
>> Hello,
>>
>> We are currently exploring an issue which may be related to Ceph itself
>> or to XFS - any help is very much appreciated.
>>
>> First, the picture: on a relatively old cluster with two years of uptime
>> and ten months since the filesystem was recreated on every OSD, one of
>> the daemons has been flapping approximately once per day for a couple of
>> weeks, with no external reason (bandwidth/IOPS/host issues). It looks
>> almost the same every time - the OSD suddenly stops serving requests for
>> a short period, gets kicked out on its peers' reports, then returns in a
>> couple of seconds. Of course, a small but sensitive number of requests
>> gets delayed twice by 15-30 seconds, which is bad for us. The only thing
>> which correlates with this kick is a peak of I/O - not too large, not
>> even saturating the underlying disk, but alone in the cluster and
>> clearly visible. There have also been at least two occurrences *without*
>> a correlated iowait peak.
>
> So, actual numbers and traces are the only things that tell us what
> is happening during these events. See here:
>
> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>
> If it happens at almost the same time every day, then I'd be looking
> at the crontabs to find what starts up about that time. The output
> of top will also probably tell you what process is running, too. iotop
> might be instructive, and blktrace almost certainly will be....
>
>> I have two theories - either we are touching some sector on the disk
>> which is about to be marked as dead but does not show up in the SMART
>> statistics yet, or (and I
> Doubt it - SMART doesn't cause OS visible IO dispatch spikes.
>
>> believe this is more likely) some kind of XFS fatigue; in my experience
>> a near-bad sector would be touched more frequently and the resulting
>> impact would leave traces in dmesg/SMART. I
>
> I doubt that, too, because XFS doesn't have anything that is
> triggered on a daily basis inside it. Maybe you've got xfs_fsr set
> up on a cron job, though...
>
>> would like to ask whether anyone has had a similar experience before, or
>> can suggest a way to poke at the existing file system. If no suggestions
>> appear, I'll probably reformat the disk and, if the problem remains after
>> a refill, replace it, but I think less destructive actions can be taken
>> first.
>
> Yeah, monitoring and determining which process is issuing the IO is
> what you need to do first.
>
> Cheers,
>
> Dave.
> --
> Dave Chinner
> david@xxxxxxxxxxxxx

Thanks Dave,

there is definitely no cron job set for a specific time (though most of
the lockups happened within a relatively small interval which correlates
with the Ceph snapshot operations). In at least one case no Ceph snapshot
operations (including delayed removal) happened, and in at least two cases
no I/O peak was observed. We have observed and eliminated weird lockups
related to Open vSwitch behaviour before - we combine storage and compute
nodes, so quirks in the OVS datapath caused very interesting and weird
system-wide lockups on (supposedly) a spinlock - and at times we saw
'pure' Ceph lockups on XFS with 3.4-3.7 kernels, all of which correlated
with very high context-switch peaks.

The current issue seemingly has nothing to do with spinlock-like bugs or
a plain hardware problem; we even rebooted the problematic node to check
whether the memory allocator might be getting stuck at the border of a
specific NUMA node, with no luck, although the first reappearance of the
bug was then delayed by a few days. Disabling lazy allocation by
specifying allocsize did nothing either. It may look like I am insisting
that this is an XFS bug, whereas a Ceph bug would seem more likely given
Ceph's far more complicated logic and operational behaviour, but the
persistence of the problem on a specific node across restarts of the Ceph
storage daemon suggests a relation to an unlucky byte sequence more than
anything else. If it finally turns out to be a Ceph bug, it will ruin the
expectations we have built over two years of close experience with the
product, and if it is an XFS bug, we have not seen anything like it
before, though we have a pretty large collection of XFS-related lockups
from earlier kernels.

So, my understanding is that we are hitting either a very rare memory
allocator bug in the XFS case or an age-related Ceph issue; both are very
unlikely to exist, but I cannot imagine anything else. If it helps, I can
collect a series of perf events during the next occurrence, or exact
iostat output (my graphs only show that the I/O was not choked completely
when the peak appeared, that's all).
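
For the per-process side, something like the rough sketch below is what I
would run alongside iostat to catch which process owns the peak -
essentially a stripped-down iotop. (Assumptions on my part: Python 3 on the
OSD node, run as root, per-task I/O accounting enabled in the kernel so
that /proc/<pid>/io is populated; the 5-second interval is just a guess to
be tuned.)

#!/usr/bin/env python3
# Rough sketch: sample read_bytes/write_bytes from /proc/<pid>/io twice
# and print the processes that moved the most data in between.
import os
import time

INTERVAL = 5  # seconds between the two samples; adjust to bracket the peak

def sample():
    """Return {pid: (read_bytes, write_bytes, comm)} for all readable PIDs."""
    procs = {}
    for pid in filter(str.isdigit, os.listdir('/proc')):
        try:
            with open('/proc/%s/io' % pid) as f:
                fields = dict(line.split(': ') for line in f.read().splitlines())
            with open('/proc/%s/comm' % pid) as f:
                comm = f.read().strip()
            procs[pid] = (int(fields['read_bytes']),
                          int(fields['write_bytes']), comm)
        except (OSError, KeyError, ValueError):
            continue  # process exited or stats unreadable; skip it
    return procs

before = sample()
time.sleep(INTERVAL)
after = sample()

deltas = []
for pid, (rd, wr, comm) in after.items():
    rd0, wr0, _ = before.get(pid, (rd, wr, comm))
    deltas.append((rd - rd0 + wr - wr0, rd - rd0, wr - wr0, pid, comm))

# Top ten movers over the interval, biggest combined delta first.
for total, rd, wr, pid, comm in sorted(deltas, reverse=True)[:10]:
    print('%-8s %-20s read %8d KiB  write %8d KiB'
          % (pid, comm, rd // 1024, wr // 1024))

Left running in a loop around the usual window, that should at least tell
us whether the spike comes from the ceph-osd process itself or from
something else on the box; blktrace can then narrow it down to the actual
requests.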

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



