Re: Ceph on just two nodes being clients - reasonable?

On Wed, Jan 19, 2011 at 7:57 AM, Gregory Farnum <gregf@xxxxxxxxxxxxxxx> wrote:

> However, there is a serious issue with running clients and servers on
> one machine, which may or may not be a problem depending on your use
> case: Deadlock becomes a significant possibility. This isn't a problem
> we've come up with a good solution for, unfortunately, but imagine
> you're writing a lot of files to Ceph. Ceph dutifully writes them and
> the kernel dutifully caches them. You also have a lot of write
> activity so the Ceph kernel client is doing local caching. Then the
> kernel comes along and says "I'm low on memory! Flush stuff to disk!"
> and the kernel client tries to flush it out...which involves creating
> another copy of the data in memory on the same machine. Uh-oh!
> Now if you use the FUSE client this won't be an issue, but your
> performance also won't be so good. :/

If you knew the maximum memory consumption of the daemons, you could
use mlock to lock all of their pages into memory (make them
unswappable). Then you could use rlimit to ensure that if a daemon
ever tried to allocate beyond that limit, it would be killed.

That would prevent the scenario you outlined above, where there isn't
enough memory left to flush the page cache. Of course, for this to be
feasible we would first need to reduce the daemons' memory consumption
and make it deterministic.
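As a rough sketch of what that would look like on Linux (not from the
thread; the 512 MiB budget, the function name, and the use of RLIMIT_AS
are all illustrative assumptions -- the real ceiling would have to be
measured once consumption is deterministic):

```python
# Sketch of the mlock + rlimit idea: pin the daemon's pages so they can
# never be swapped, then cap its address space so an allocation past the
# budget fails instead of competing with page-cache writeback.
import ctypes
import ctypes.util
import resource

MCL_CURRENT = 1  # lock pages already mapped (Linux mlockall flag)
MCL_FUTURE = 2   # lock pages mapped from now on

def pin_and_cap(budget_bytes):
    """Lock all pages into RAM, then cap total address space at
    budget_bytes. Returns the (soft, hard) limit actually installed."""
    libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
    if libc.mlockall(MCL_CURRENT | MCL_FUTURE) != 0:
        # Needs CAP_IPC_LOCK (or a raised RLIMIT_MEMLOCK); treat as
        # non-fatal in this sketch so the cap still goes in.
        print("mlockall failed; pages remain swappable")
    # RLIMIT_AS bounds total virtual memory: malloc/mmap beyond it fail
    # with ENOMEM, so the daemon dies (or degrades) instead of pushing
    # the machine into the writeback deadlock described above.
    resource.setrlimit(resource.RLIMIT_AS, (budget_bytes, budget_bytes))
    return resource.getrlimit(resource.RLIMIT_AS)

if __name__ == "__main__":
    soft, hard = pin_and_cap(512 * 1024 * 1024)  # assumed 512 MiB budget
    print("address space capped at", soft, "bytes")
```

Note that RLIMIT_AS makes oversized allocations fail rather than
delivering a kill signal outright; whether the daemon then exits
cleanly is up to its allocation-failure handling.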

cheers,
Colin
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html

