Re: Ceph on just two nodes being clients - reasonable?

On Wed, Jan 19, 2011 at 09:55:27AM -0800, Colin McCabe wrote:
> If you knew what the maximum memory consumption for the daemons would
> be, you could use mlock to lock all those pages into memory (make them
> unswappable.) Then you could use rlimit to ensure that if the daemon
> ever tried to allocate more than that, it would be killed.

The classic NFS loopback-mount deadlock is less about how much memory
the daemons grab via malloc and friends, and more about buffer
cache management in the kernel.

With a "loopback ceph", memory pressure from activity on the kernel
ceph client mountpoint can interact badly with the buffer cache the
OSD needs to work well, whether or not the OSD userspace tries to
limit itself.

It's one of those "it'll work until you have a bad day" things.

http://www.webservertalk.com/archive242-2007-10-2051163.html

https://bugzilla.redhat.com/show_bug.cgi?id=489889

http://lkml.org/lkml/2006/12/14/448

http://docs.google.com/viewer?a=v&q=cache:ONtIKJFSC7QJ:https://tao.truststc.org/Members/hweather/advanced_storage/Public%2520resources/network/nfs_user+nfs+loopback+deadlock+linux&hl=en&gl=us&pid=bl&srcid=ADGEESgpaVYYNoh2pmvPVQ9I_bpLLcoF3GJIMKavomIHNgTb-cbii6RVtWg28poJKdHBqQgKGXzVA2NOsC25FtWMP3yywTfNkX9N26IrKVIcVA9eRz6ZGBx1_Ur0JerUrfBQlPcmcBBz&sig=AHIEtbSjGX_hCVny345iFSq7WKBvxNZmIw
(slide 5)

-- 
:(){ :|:&};:
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

