Re: cgroups to prevent OSD from taking down a whole machine

On Fri, Feb 8, 2013 at 6:46 PM, Wido den Hollander <wido@xxxxxxxxx> wrote:
> Hi,
>
> Has anybody tried this yet?
>
> After running into the memory leaks during scrubbing [0], I started thinking
> about a way to limit OSDs to a specific amount of memory.
>
> A machine has 32GB of memory and 4 OSDs, so you might want to limit each OSD
> to 8GB so that it can't take the whole machine down and would only kill itself.
>
> I think I'll give it a try on a couple of machines, but I just wanted to see
> if anybody has tried this already or sees any downsides to this?
>

Yep, it works fine, although I'd recommend taking a look at an
oom-delay patch to handle OOM situations inside the cgroup more
gracefully. And of course you'll pay for the memory cgroup accounting
with a few percent of overall node performance.
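
For reference, a minimal sketch of what such a per-OSD limit looks like
with the cgroup v1 memory controller, assuming it is mounted at
/sys/fs/cgroup/memory; the group name, the 8 GiB figure and the pid are
only illustrative, not anything the ceph packages set up for you:

    import os

    # Assumes the cgroup v1 memory controller is mounted at
    # /sys/fs/cgroup/memory; "ceph-osd.0" is just an example group name.
    cg = "/sys/fs/cgroup/memory/ceph-osd.0"
    limit = 8 * 1024 ** 3  # 8 GiB in bytes

    os.makedirs(cg, exist_ok=True)

    # Cap the group's memory. Once the limit is hit and nothing can be
    # reclaimed, the kernel OOM-kills tasks inside this cgroup only,
    # instead of picking victims machine-wide.
    with open(os.path.join(cg, "memory.limit_in_bytes"), "w") as f:
        f.write(str(limit))

    # Move an already-running OSD into the group (pid would come from
    # the OSD's pid file; 12345 is a placeholder).
    osd_pid = 12345
    with open(os.path.join(cg, "tasks"), "w") as f:
        f.write(str(osd_pid))

The libcgroup tools (cgcreate/cgexec) or a cgroup-aware init setup get
you to the same place; the essential bits are just the limit in
memory.limit_in_bytes and the OSD pid in the group's tasks file.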

> We use cgroups in the CloudStack project (through libvirt) to prevent a
> memory leak in one KVM process from taking down a whole hypervisor; it works
> pretty well there.
>
> Suggestions or comments?
>
> Wido
>
> [0]: http://tracker.ceph.com/issues/3883