Re: 3.9.0: general protection fault

On 5/10/13 5:19 AM, Bernd Schubert wrote:
> On 05/09/2013 02:41 AM, Dave Chinner wrote:
>> On Wed, May 08, 2013 at 07:48:04PM +0200, Bernd Schubert wrote:
>>> On 05/08/2013 12:07 AM, Dave Chinner wrote:
>>>> On Tue, May 07, 2013 at 01:18:13PM +0200, Bernd Schubert wrote:
>>>>> On 05/07/2013 03:12 AM, Dave Chinner wrote:
>>>>>> On Mon, May 06, 2013 at 02:47:31PM +0200, Bernd Schubert wrote:
>>>>>>> On 05/06/2013 02:28 PM, Dave Chinner wrote:
>>>>>>>> On Mon, May 06, 2013 at 10:14:22AM +0200, Bernd Schubert wrote:
>>>>>>>>> And another protection fault, this time with 3.9.0. It always
>>>>>>>>> happens on one of the servers. It's ECC memory, so I don't
>>>>>>>>> suspect a faulty memory bank. Going to fsck now.
>>>>>>>>
>>>>>>>> http://xfs.org/index.php/XFS_FAQ#Q:_What_information_should_I_include_when_reporting_a_problem.3F
>>>>>>>
>>>>>>> Isn't that a bit of overhead? And I can't provide /proc/meminfo and
>>>>>>> the others, as this issue causes a kernel panic a few traces later.
>>>>>>
>>>>>> Provide what information you can.  Without knowing a single thing
>>>>>> about your hardware, storage config and workload, I can't help you
>>>>>> at all. You're asking me to find a needle in a haystack blindfolded
>>>>>> and with both hands tied behind my back....
>>>>>
>>>>> I see that xfs_info, meminfo, etc. are useful, but /proc/mounts?
>>>>> Maybe you want "cat /proc/mounts | grep xfs"? Attached is the
>>>>> output of /proc/mounts; please let me know if you really need
>>>>> all of that non-xfs output.
>>>>
>>>> Yes. You never know what is relevant to a problem that is reported,
>>>> especially if there are multiple filesystems sharing the same
>>>> device...
>>>
>>> Hmm, I see. But you need to extend your questions to multipathing
>>> and shared storage.

If you'd like to add that to the wiki, that would be great.

>> Why would we? Anyone using such a configuration who reports a bug
>> is usually clueful enough to mention it when describing their
>> RAID/LVM setup.  The FAQ entry covers the basic information needed
>> to start meaningful triage, not *all* the information we might ask
>> for. It's the baseline we start from.
>>
>> Indeed, the FAQ exists because I got sick of asking people for the
>> same information several times a week, every week, in response to
>> poor bug reports like yours. It's far more efficient to paste a
>> link several times a week.  i.e. the FAQ entry is there for my
>> benefit, not yours.
> 
> Poor bug report or not, most of the information you ask about in the FAQ is entirely irrelevant for this issue.

If I had a dollar for every time a bug reporter left out "irrelevant"
information that turned out to be critical, I might be retired by now.  :)

If a few developers on the list are going to scale to supporting every
user with a problem, we need to share the effort efficiently, and that
means putting a bit more of the burden on the reporter to cut down on
the back-and-forth cycles of gathering information.

If anyone wants a quick & useful project, a script that gathers all of
the info requested in the FAQ would be a step in the right direction.
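A rough, untested sketch of what that might look like (the file list
just follows what the FAQ and this thread mention -- kernel version,
xfsprogs version, meminfo, mounts, partitions, xfs_info, dmesg -- and
the output filename is only illustrative):

    #!/bin/sh
    # Collect the basic information the XFS FAQ asks for in bug reports.
    out=xfs-bugreport-$(date +%Y%m%d-%H%M%S).txt
    {
        echo "== kernel ==";      uname -a
        echo "== xfsprogs ==";    xfs_repair -V
        echo "== meminfo ==";     cat /proc/meminfo
        echo "== mounts ==";      cat /proc/mounts
        echo "== partitions ==";  cat /proc/partitions
        # xfs_info for every mounted XFS filesystem
        awk '$3 == "xfs" { print $2 }' /proc/mounts | while read mnt; do
            echo "== xfs_info $mnt =="
            xfs_info "$mnt"
        done
        echo "== dmesg ==";       dmesg
    } > "$out" 2>&1
    echo "wrote $out"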

-Eric

_______________________________________________
xfs mailing list
xfs@xxxxxxxxxxx
http://oss.sgi.com/mailman/listinfo/xfs



