Re: [PATCH 0/3] Early use of boot service memory

On Fri, Nov 15, 2013 at 09:40:49AM -0800, H. Peter Anvin wrote:
> On 11/15/2013 09:33 AM, Yinghai Lu wrote:
> > 
> > If the system supports Intel IOMMU, we only need that 72M for SWIOTLB
> > or the AMD workaround.
> > If the user really cares about that on an Intel IOMMU enabled system, they
> > could use "crashkernel=0,low" to get that 72M back.
> > 
> > And that 72M is under 4G instead of under 896M.
> > 
> > So reserving 72M is not better than reserving 128M?
> > 
> 
> Those 72M are in addition to 128M, which does add up quite a bit.
> However, the presence of a working IOMMU in the system is something that
> should be possible to know at setup time.
> 

And IOMMU support is very flaky with kdump. Also, the IOMMU can be turned
off on the command line, and doing that would force one to remove
"crashkernel_low=0" as well. So changing one command line option forces a
change to another. It is complicated.
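To illustrate the kind of interplay I mean (a rough sketch only; the sizes
and exact option spellings here are just examples, not a recommendation):

    # IOMMU enabled: the low reservation can be handed back.
    intel_iommu=on crashkernel=128M,high crashkernel=0,low

    # IOMMU turned off on the command line: SWIOTLB needs low memory again,
    # so the "0,low" override has to be dropped at the same time.
    intel_iommu=off crashkernel=128M,high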

Also, there are very few systems which work with the IOMMU on, and a lot
more which work without it. We have all these DMAR issues, and still nobody
has been able to address the IOMMU problems properly.

> Now, this was discussed partly in the context of VMs.  I want to say, as
> I have said again and again: the right way to dump a VM is with hypervisor
> assistance rather than an in-image dumper, which is both expensive and
> may be corrupted by the failure.

I agree that taking assistance from the hypervisor would be useful.

One reason we use kdump for VMs too is that it makes life simple. There
is no difference in how we configure, start and manage crash dumps
on bare metal or inside a VM. And in practice I have not heard of a lot
of kdump failures in VM environments.

So while reliability remains a theoretical concern, in practice it
has not been a real problem, and that is one reason, I think, we have
not seen a major push for an alternative method in VM environments.

> 
> It would be good if the various VMs with interest in Linux would agree
> on a mechanism for launching a dumper.  This can be done either inband
> (on the execution of a specific hypercall, the hypervisor terminates I/O
> to the guest, inserts a dumper into the address space and launches it)
> or out-of-band (the hypervisor itself, or an assistant program, writes a
> dump file) or as a hybrid (a new dump guest is launched with the
> hypervisor-written or hypervisor-preserved crashed guest image somehow
> passed to it.)

virsh can already take dumps of a KVM guest, so the hypervisor calling out
to an assistant program might help here.
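
For example, something like the following already works from the host for
a KVM guest (a sketch from memory: the domain name and path are
placeholders, and --memory-only needs a reasonably recent libvirt):

    # Take a memory-only dump of a running guest, entirely from the host side.
    virsh dump --memory-only guest-domain /var/tmp/guest-domain.core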

Anyway, we will gladly use any new dump mechanism for VMs once things
start working seamlessly. Until all this materializes, though, forcing the
user to reserve that extra 72M concerns me (both in bare-metal and
virtualized environments).

Thanks
Vivek



