Re: Ceph on just two nodes being clients - reasonable?

On Wed, Jan 19, 2011 at 12:03 PM, Tommi Virtanen
<tommi.virtanen@xxxxxxxxxxxxx> wrote:
> On Wed, Jan 19, 2011 at 09:55:27AM -0800, Colin McCabe wrote:
>> If you knew what the maximum memory consumption for the daemons would
>> be, you could use mlock to lock all those pages into memory (make them
>> unswappable.) Then you could use rlimit to ensure that if the daemon
>> ever tried to allocate more than that, it would be killed.
>
> The classic nfs loopback mount deadlock is less about how much memory
> the daemons are grabbing via malloc etc, and more about the buffer
> cache management in kernel.

My understanding is that nfsd tries to allocate memory, the allocation
cannot be satisfied because the page cache is occupying that memory,
and reclaiming the page cache means writing dirty pages back through
nfsd itself -- which is exactly what is blocked waiting for the memory.

I guess the question you are asking is whether nfsd, just doing I/O,
requires kernel memory that might not be available. I'm not entirely
sure of the answer to that. Unfortunately, none of those links contains
any information on the subject (I had high hopes for the lkml one, but
it was about an unrelated race in NFS).

Colin
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

