changes.  Once it is confirmed there is a solution with the 64bit kernel
we just need a small patch to boot.txt and a few tweaks to /sbin/kexec
to handle a 64bit bzImage.

>> I don't buy the argument that there is a direct connection between
>> the amount of memory you have and how much memory it takes to dump it.
>> Even an indirect connection seems suspicious.
>
> Memory requirement by user space might be of interest though, like dump
> filtering tools. I vaguely remember that it used to first traverse all
> the memory pages, create some internal data structures and then start
> dumping.
>
> So memory required by the filtering tool might be directly proportional
> to the amount of memory present in the system.

Assuming your dump filtering tool creates a bitmap of pages to be dumped
you get a ratio of 32K to 1.  Or 3MB for 100GB and 32MB for 1TB (see the
sketch below).  Which is noticeable in the worst case but definitely not
enough to push us past 2GB.

> Vitaly, have you really run into cases where the 2G upper limit is a
> concern?  What is the configuration you have, how much memory does it
> have, and how much memory are you planning to reserve for the kdump
> kernel?

A good question.

Eric
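
A quick back-of-the-envelope sketch of the bitmap arithmetic above: one
bit per 4 KiB page gives the 32768:1 (32K to 1) ratio between system
memory and bitmap size.  This is only an illustration of the calculation,
not code from any actual dump filtering tool.

	/*
	 * Sketch: size of a one-bit-per-page dump bitmap for a given
	 * amount of memory, assuming 4 KiB pages.  Illustrative only.
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define PAGE_SIZE 4096ULL

	static uint64_t bitmap_bytes(uint64_t mem_bytes)
	{
		uint64_t pages = mem_bytes / PAGE_SIZE;	/* one bit per page */
		return (pages + 7) / 8;			/* round up to whole bytes */
	}

	int main(void)
	{
		uint64_t gib = 1024ULL * 1024 * 1024;

		/* ~3 MB of bitmap for 100 GB of RAM, 32 MB for 1 TB */
		printf("100 GiB -> %llu MiB of bitmap\n",
		       (unsigned long long)(bitmap_bytes(100 * gib) >> 20));
		printf("  1 TiB -> %llu MiB of bitmap\n",
		       (unsigned long long)(bitmap_bytes(1024 * gib) >> 20));
		return 0;
	}

Either way the bitmap stays in the tens of megabytes even for terabyte
machines, which is why it does not by itself justify reserving memory
above 2GB.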