Re: cgroups to prevent OSD from taking down a whole machine

I've been thinking about using this for machines where people want to run OSDs and VMs on the same nodes: keep Ceph and the VMs in separate cgroups to help keep them from interfering with each other.

It won't help with memory bandwidth or QPI/HyperTransport throughput (unless you have them segmented on different sockets), but it should help in some other cases.
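
E.g., a rough (untested) sketch of the socket split, assuming cgroup v1 with the cpuset controller mounted at /sys/fs/cgroup/cpuset; the group names and the two-socket CPU/NUMA layout here are made up for illustration:

    import os

    # Hypothetical two-socket box: CPUs 0-7 on NUMA node 0, CPUs 8-15 on node 1.
    GROUPS = {
        "ceph": ("0-7", "0"),    # cpuset.cpus, cpuset.mems for socket 0
        "vms":  ("8-15", "1"),   # socket 1
    }

    for name, (cpus, mems) in GROUPS.items():
        cg = os.path.join("/sys/fs/cgroup/cpuset", name)
        os.makedirs(cg, exist_ok=True)
        # cpuset.cpus and cpuset.mems must be populated before any task can join.
        with open(os.path.join(cg, "cpuset.cpus"), "w") as f:
            f.write(cpus)
        with open(os.path.join(cg, "cpuset.mems"), "w") as f:
            f.write(mems)

    # Then write each ceph-osd PID into .../ceph/tasks and each KVM PID
    # into .../vms/tasks to actually partition the processes.

For the VM side, libvirt already places each domain in its own cgroup, as Wido mentions below.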

Mark

On 02/08/2013 08:46 AM, Wido den Hollander wrote:
Hi,

Has anybody tried this yet?

Running into the memory leaks during scrubbing [0], I started thinking
about a way to limit each OSD to a specific amount of memory.

Say a machine has 32GB of memory and 4 OSDs; you might want to limit each
OSD to 8GB so that a leaking OSD can't take the whole machine down and
would only kill itself.
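
Something like this (untested) is what I have in mind, assuming cgroup v1
with the memory controller mounted at /sys/fs/cgroup/memory; the PID and
group name are placeholders:

    import os

    OSD_PID = 12345                     # hypothetical PID of one ceph-osd
    LIMIT = 8 * 1024 ** 3               # 8GB in bytes

    cg = "/sys/fs/cgroup/memory/osd.0"  # hypothetical per-OSD group
    os.makedirs(cg, exist_ok=True)

    # Hard limit: when it is exceeded and reclaim fails, the kernel's OOM
    # killer picks a task inside this cgroup only.
    with open(os.path.join(cg, "memory.limit_in_bytes"), "w") as f:
        f.write(str(LIMIT))

    # Move the OSD into the cgroup.
    with open(os.path.join(cg, "tasks"), "w") as f:
        f.write(str(OSD_PID))

With memory.limit_in_bytes set per OSD, a leaking OSD should kill itself
instead of taking the node with it.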

I think I'll give it a try on a couple of machines, but I just wanted to
see whether anybody has tried this already or sees any downsides to it.

We use cgroups in the CloudStack project (through libvirt) to prevent a
memory leak in one KVM process from taking down a whole hypervisor; it
works pretty well there.

Suggestions or comments?

Wido

[0]: http://tracker.ceph.com/issues/3883