Fatigue for XFS

On 05/06/2014 01:23 AM, Dave Chinner wrote:
> On Tue, May 06, 2014 at 12:59:27AM +0400, Andrey Korolyov wrote:
>> On Tue, May 6, 2014 at 12:36 AM, Dave Chinner <david at fromorbit.com> wrote:
>>> On Mon, May 05, 2014 at 11:49:05PM +0400, Andrey Korolyov wrote:
>>>> Hello,
>>>>
>>>> We are currently investigating an issue which may be related to Ceph
>>>> itself or to XFS - any help is very much appreciated.
>>>>
>>>> First, the picture: on a relatively old cluster with two years of
>>>> uptime, ten months after recreating the filesystem on every OSD, one of
>>>> the daemons started to flap approximately once per day for a couple of
>>>> weeks, with no external cause (bandwidth/IOPS/host issues). It looks
>>>> almost the same every time - the OSD suddenly stops serving requests
>>>> for a short period, gets kicked out on its peers' reports, then returns
>>>> in a couple of seconds. Of course, a small but sensitive number of
>>>> requests gets delayed by 15-30 seconds twice, which is bad for us. The
>>>> only thing that correlates with this kick is an I/O peak - not too
>>>> large, not even saturating the underlying disk, but the only one in the
>>>> cluster and clearly visible. There are also at least two occurrences
>>>> *without* a correlated iowait peak.
>>>
>>> So, actual numbers and traces are the only things that tell us what
>>> is happening during these events. See here:
>>>
>>> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>>>
>>> If it happens at almost the same time every day, then I'd be looking
>>> at the crontabs to find what starts up about that time. The output of
>>> top will probably tell you what process is running, too. iotop
>>> might be instructive, and blktrace almost certainly will be....
>>>
>>>> I have two theories - either we are touching some sector on the disk
>>>> that is about to be marked as dead but is not shown in SMART statistics, or (I
>>>
>>> Doubt it - SMART doesn't cause OS visible IO dispatch spikes.
>>>
>>>> believe so) some kind of XFS fatigue, which seems more likely in this
>>>> case, since a near-bad sector would be touched more frequently and, in
>>>> my experience, its impact would leave traces in dmesg/SMART. I
>>>
>>> I doubt that, too, because XFS doesn't have anything that is
>>> triggered on a daily basis inside it. Maybe you've got xfs_fsr set
>>> up on a cron job, though...
>>>
>>>> would like to ask whether anyone has had a similar experience, or can
>>>> suggest a way to poke at the existing file system. If no suggestions
>>>> appear, I'll probably reformat the disk and, if the problem remains
>>>> after a refill, replace it, but I think less destructive actions can be
>>>> taken first.
>>>
>>> Yeah, monitoring and identifying the process that is issuing the IO
>>> is what you need to do first.
>>>
>>> Cheers,
>>>
>>> Dave.
>>> --
>>> Dave Chinner
>>> david at fromorbit.com
>>
>> Thanks Dave,
>>
>> there is definitely no cron job set for a specific time (though most of
>> the lockups happened within a relatively small time window, which
>> correlates with the Ceph snapshot operations).
> 
> OK.
> 
> FWIW, Ceph snapshots on XFS may not be immediately costly in terms
> of IO - they can be extremely costly after one is taken when the
> files in the snapshot are next written to. If you are snapshotting
> files that are currently being written to, then that's likely to
> cause immediate IO issues...
> 
>> In at least one case no Ceph snapshot operations (including delayed
>> removal) happened, and in at least two cases no I/O peak was observed.
>> We have observed and eliminated weird lockups related to Open vSwitch
>> behaviour before - we combine storage and compute nodes, so quirks in
>> the OVS datapath caused very interesting and strange system-wide lockups
>> on (supposedly) a spinlock, and we saw 'pure' Ceph lockups on XFS back
>> when we ran 3.4-3.7 kernels, all of which correlated with very high
>> context switch peaks.
> 
> Until we determine what is triggering the IO, the application isn't
> really a concern.
> 
>> The current issue seemingly has nothing to do with spinlock-like bugs or
>> a plain hardware problem. We even rebooted the problematic node to check
>> whether the memory allocator might be getting stuck at the border of a
>> specific NUMA node - it did not help, although the first reappearance of
>> the bug was then delayed by some days. Disabling lazy allocation by
>> specifying allocsize did nothing either. It may look like I am insisting
>> that this is an XFS bug, whereas a Ceph bug would be more likely given
>> its far more complicated logic and operational behaviour, but the
>> persistence on a specific node across restarts of the Ceph storage
>> daemon points to an unlucky byte sequence more than anything else. If it
>> finally turns out to be a Ceph bug, it will ruin the expectations built
>> on our two years of close experience with the product, and if it is an
>> XFS bug, we have not seen anything like it before, though we have a
>> pretty good collection of XFS-related lockups from earlier kernels.
> 
> Long experience with triaging storage performance issues has taught
> me to ignore what anyone *thinks* is the cause of the problem; I
> rely on the data that is gathered to tell me what the problem is. I
> find that hard data has a nasty habit of busting assumptions,
> expectations, speculations and hypotheses... :)
> 
>> If it helps, I can collect a series of perf events during the next
>> occurrence, or exact iostat output (my graphs only show that the I/O was
>> not completely choked when the peak appeared, that's all).
> 
> Before delving into perf events, we need to know what we are looking
> for. That's what things like iostat, vmstat, top, blktrace, etc will
> tell us - where to point the microscope.
> 
> Cheers,
> 
> Dave.
> 

Thanks,

after a long and adventurous investigation we found that the effect was
most probably caused by the overlapping tails of multiple background
snapshot deletions in Ceph, so it had nothing to do with XFS. The
behaviour was still very strange, and because of the very large time
intervals involved we had not been able to see the correlation between
those events earlier. Background snapshot removal in Ceph produces a
kind of I/O 'spike' at the end of the process, so if one deletes a
couple of snapshots each holding a similar amount of committed bytes,
their removals will fire these spikes almost synchronously at the end,
causing one or more OSD daemons to choke.
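
In case anyone hits a similar pattern: a very coarse per-disk sampler along
the lines of the sketch below is enough to timestamp such write spikes so
they can later be lined up against snapshot removal times. This is only a
minimal illustration in Python reading /proc/diskstats; the device name and
the threshold are placeholders, not something we actually ran:

#!/usr/bin/env python
# Minimal sketch: sample /proc/diskstats once per second and log a
# timestamp whenever the per-interval write rate on one device exceeds
# a threshold.  DEVICE and THRESHOLD_MB_S are placeholders.
import time

DEVICE = "sdb"            # hypothetical OSD data disk
THRESHOLD_MB_S = 100.0    # hypothetical spike threshold
SECTOR_BYTES = 512

def sectors_written(dev):
    # Field 10 of a /proc/diskstats line is the number of sectors written.
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            if fields[2] == dev:
                return int(fields[9])
    raise RuntimeError("device %s not found" % dev)

prev = sectors_written(DEVICE)
while True:
    time.sleep(1)
    cur = sectors_written(DEVICE)
    mb_per_s = (cur - prev) * SECTOR_BYTES / (1024.0 * 1024.0)
    if mb_per_s > THRESHOLD_MB_S:
        print("%s  write spike on %s: %.1f MB/s" % (time.ctime(), DEVICE, mb_per_s))
    prev = cur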

