[PATCH] Makedumpfile: vmcore size estimate

(2014/07/02 17:13), bhe at redhat.com wrote:
> On 07/02/14 at 12:25am, Atsushi Kumagai wrote:
>>> On Thu, Jun 26, 2014 at 08:21:58AM +0000, Atsushi Kumagai wrote:
>
>>> I still don't understand why makedumpfile can't provide a reasonable
>>> estimate *of that moment*.
>>
>> I think *estimate* is inappropriate to express this feature since it just
>> analyzes the memory usage at that moment. I want to avoid the misunderstanding
>> that this feature is a prediction.
>
> I agree that describing it as analyzing memory usage is better.
>
>>
>>> I don't want to implement a separate utility for this as makedumpfile
>>> already has all the logic to go through pages, prepare bitmaps and figure
>>> out which ones will be dumped. It will be just duplication of code and
>>> waste of effort.
>>>
>>> We have a real problem at our hands. What do we tell customers about how
>>> big their dump partition should be? They have a multi-terabyte machine. Do
>>> we tell them to create a multi-terabyte dedicated dump partition?
>>> That's not practical at all.
>>>
>>> And asking them to guess is not reasonable either. makedumpfile can make
>>> much more educated guesses. It is not perfect, but it is still much better
>>> than the user making a wild guess.
>>
>> Well, fine. I have 2 requests for accepting this feature:
>>
>>    - Make this feature as simple as possible.
>>      I don't want to take time to maintain this, so I prefer the
>>      other idea which, as Baoquan said, is like HP-UX's feature.
>
> OK, I will try. In fact it's simple to just show the number of dumpable
> pages.
>
>>
>>    - Don't provide this feature as a "vmcore size estimate"; it just
>>      shows the number of dumpable pages at the moment. Then please show
>>      a WARNING message to inform users about this.
>
> OK, good suggestion. Will do.
>

Several things make the actual vmcore size hard for users to guess: not only compression, but also the additional metadata, note data, header, and bitmaps of a vmcore. I think it is important to stress that the actual dump size can be larger than the displayed number of pages suggests, so users should allow enough margin for that.
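To illustrate the point above, here is a minimal sketch of how such overhead adds up. This is not makedumpfile's actual code; the function name, the per-page descriptor size, and the fixed overheads are assumptions for illustration only.

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SIZE 4096UL

/* Hypothetical upper-bound estimate: page data (assumed uncompressed)
 * plus the header, note data, two bitmaps, and per-page descriptors
 * that a kdump-compressed vmcore also carries. The 24-byte descriptor
 * size is an assumption, not the real on-disk layout. */
uint64_t estimate_upper_bound(uint64_t total_pages, uint64_t dumpable_pages,
                              uint64_t header_bytes, uint64_t note_bytes)
{
	/* Each of the two bitmaps needs one bit per page of RAM. */
	uint64_t bitmap_bytes = 2 * ((total_pages + 7) / 8);
	uint64_t desc_bytes = dumpable_pages * 24;

	return header_bytes + note_bytes + bitmap_bytes + desc_bytes
	       + dumpable_pages * PAGE_SIZE;
}
```

Even this rough model shows that the number of dumpable pages alone understates the final file size, which is why a warning to users matters.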

As a fail-safe, the ENOSPC case should be addressed more. Sadly, preparing a disk that is too small for the vmcore is a human error, and in general we cannot avoid it in the real world. It's important to keep the vmcore valid even in case of ENOSPC, in the sense that at least the generated part of the vmcore can be correctly analyzed by crash. In this direction, I previously sent a patch to create the 1st bitmap first, but that patch alone is still unsatisfactory for dealing with the issue: the 2nd bitmap and the data left in caches need to be flushed too.
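The flushing behavior described above can be sketched as follows. This is a simplified illustration, not makedumpfile's real write path; the function name and return convention are assumptions.

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>

/* Hypothetical helper: write one chunk of page data, and on failure
 * (e.g. ENOSPC) flush whatever is still buffered so that the part of
 * the vmcore already written -- header, bitmaps, earlier pages --
 * reaches the disk and remains analyzable by crash. */
int write_page_checked(FILE *out, const void *buf, size_t len)
{
	if (fwrite(buf, 1, len, out) != len) {
		int saved = errno;

		/* Push buffered bytes out before giving up, so the
		 * partial dump is as complete and consistent as possible. */
		fflush(out);
		return saved == ENOSPC ? -ENOSPC : -EIO;
	}
	return 0;
}
```

The design point is that flushing on the error path costs nothing in the success case, but it is what makes a truncated vmcore worth keeping instead of discarding.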

>>
>> Could you remake your patch, Baoquan?
>>

-- 
Thanks.
HATAYAMA, Daisuke