Re: running Qemu / Hypervisor AND Ceph on the same nodes

A word of caution: while my OSDs normally use very little CPU, I have occasionally had an issue where they saturate the CPU (not necessarily during a rebuild). It might be a kernel thing, or a driver thing specific to our hosts, but if it happened to you it would now potentially impact your VMs as well. Even when things are behaving normally, CPU usage during a rebuild goes up a lot relative to steady state for long stretches. On top of that, you would also be sharing other system resources that are potential abuse vectors -- the network, for one. I would avoid it.
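
If the shared network is the bigger concern and the hosts have spare NICs, one common mitigation (a sketch only -- the subnets below are placeholders, not ours) is to put Ceph's public and cluster traffic on dedicated subnets in ceph.conf:

    [global]
        public network  = 10.0.1.0/24
        cluster network = 10.0.2.0/24

That doesn't remove the contention entirely -- the guests still reach the OSDs over the public network -- but at least replication and recovery traffic stays off the links the VMs use.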

On Thu, Mar 26, 2015 at 8:11 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
On 26-03-15 12:04, Stefan Priebe - Profihost AG wrote:
> Hi Wido,
> Am 26.03.2015 um 11:59 schrieb Wido den Hollander:
>> On 26-03-15 11:52, Stefan Priebe - Profihost AG wrote:
>>> Hi,
>>>
>>> in the past I read pretty often that it's not a good idea to run Ceph
>>> and qemu / the hypervisors on the same nodes.
>>>
>>> But why is this a bad idea? You save space and can make better use of
>>> the resources you have in the nodes anyway.
>>>
>>
>> Memory pressure during recovery *might* become a problem. If you make
>> sure that you don't allocate more than, let's say, 50% to the guests it
>> could work.
>
> Hmm, are you sure? I've never seen problems like that. Currently I run
> each Ceph node with 64GB of memory and each hypervisor node with around
> 512GB to 1TB of RAM and 48 cores.
>

Yes, it can happen. Your machines have plenty of memory, but if you
overprovision them it can still become a problem.
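
To put rough numbers on the "50% for guests" rule of thumb with Stefan's hardware (the per-OSD figure is an assumption; actual usage during recovery varies a lot): with, say, 12 OSDs on a 512GB hypervisor and a worst-case budget of ~4GB per OSD during recovery, Ceph needs on the order of 50GB. Capping guest memory at ~256GB (50%) then leaves a couple of hundred GB of headroom for the OS, page cache and recovery spikes, whereas committing 90% to guests leaves almost none.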

>> Using cgroups you could also prevent the OSDs from eating up all the memory or CPU.
> I've never seen an OSD do such crazy things.
>

Again, it really depends on the available memory and CPU. If you buy big
machines for this purpose it probably won't be a problem.
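
A minimal sketch of the cgroup idea, assuming the OSDs run as systemd units (ceph-osd@<id>.service -- not the case on every distro or Ceph release; with sysvinit you would have to create the cgroups by hand or via libcgroup), with illustrative limits:

    # cap one OSD at two cores' worth of CPU and 4GB of memory
    systemctl set-property ceph-osd@0.service CPUQuota=200% MemoryLimit=4G

systemd applies these through the cpu and memory cgroup controllers, so the effect is the same as writing the limits into /sys/fs/cgroup yourself; the point is simply that a runaway OSD then gets throttled or OOM-killed instead of starving the guests.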

> Stefan
>
>> So technically it could work, but memory and CPU pressure is something
>> that might give you problems.
>>
>>> Stefan
>>>


--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on



--
David Burley
NOC Manager, Sr. Systems Programmer/Analyst
Slashdot Media

e: david@xxxxxxxxxxxxxxxxx
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
