Andi Kleen <andi at firstfloor.org> writes:

>> As an initial approximation I would use a 32nd of low memory.
>
> That means a 1TB machine will have a 32GB crash kernel.
>
> Surely that's excessive?!?
>
> It would be repeating all the same mistakes people made with hash tables
> several years ago.
>
>> That can be written to (with enough privileges when no crash kernel is
>> loaded) to reduce the amount of memory reserved by the crash kernel.
>>
>> Bernhard does that sound useful to you?
>>
>> Amerigo does that seem reasonable?
>
> It doesn't sound reasonable to Andi.
>
> Why do you even want to grow the crash kernel that much? Is there
> any real problem with a 64-128MB crash kernel?

Because it is absolutely ridiculous in size, and user space will have to
take up the work of trimming it back down to something reasonable in the
init script.

At a practical level, crash dump userlands do things like fsck filesystems
before they mount them. For truly large machines there was a desire to
parallelize core dump writing to different disks. I don't know if that has
been implemented yet, but in that case more RAM for buffers certainly
tends to be useful.

I think if we are going to go beyond the magic boot command line we have
today, which parametrizes the amount of memory to reserve based on how
much memory the system has, we need to put user space in control. We can
only put user space in control if we initially reserve too much and let
user space release the memory it won't use.

That would allow removing magic from installers and leaving it to
installed packages, which seems a lot more maintainable.

Eric
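
[Editor's note: a minimal sketch of the "reserve too much, let user space
release the rest" idea described above. The sysfs node name
(/sys/kernel/kexec_crash_size) and the helper program itself are
illustrative assumptions; the thread does not name a specific interface.]

/* shrink_crashkernel.c: sketch of a helper an init script might call
 * to trim the crash kernel reservation from user space.
 * The path below is an assumption for illustration only. */
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
	const char *node = "/sys/kernel/kexec_crash_size";  /* assumed node */
	unsigned long long new_size;
	FILE *f;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <new-size-in-bytes>\n", argv[0]);
		return 1;
	}
	new_size = strtoull(argv[1], NULL, 0);

	/* Writing a smaller value would hand the difference back to the
	 * kernel; per the discussion above, this is only sensible while
	 * no crash kernel is loaded. */
	f = fopen(node, "w");
	if (!f) {
		perror(node);
		return 1;
	}
	if (fprintf(f, "%llu\n", new_size) < 0) {
		perror("write");
		fclose(f);
		return 1;
	}
	fclose(f);
	return 0;
}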