I have 3 monitor VMs. Two of them run on two different blades in the same chassis, but their networking is on different fabrics. The third is on a blade in a different chassis.
My monitor VMs' CPU, memory and disk I/O load is very small, as in nearly idle. The VM images are on local 10k disks on the blade; they share the disks with a few other low-I/O VMs.
I've read that the monitors can get busy and need enough I/O to justify SSDs. I imagine those must be very large clusters with at least hundreds of OSDs.
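If you want a rough idea of how much I/O your mons actually generate, something like the sketch below works. It's just an illustration, not something from our setup: it assumes the default mon data path under /var/lib/ceph/mon and a hypothetical backing device name, and it measures the whole device, so other VMs on the same disks are included.

```python
#!/usr/bin/env python
# Rough check of mon store size and write rate -- a minimal sketch.
# Assumes the default mon data path and a Linux /proc/diskstats;
# MON_PATH and DEV are placeholders to adjust for your environment.
import os
import time

MON_PATH = "/var/lib/ceph/mon"   # default location of the mon store
DEV = "sda"                      # hypothetical device backing the mon store

def store_size(path):
    """Total size in bytes of the monitor's on-disk store."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for f in files:
            total += os.path.getsize(os.path.join(root, f))
    return total

def sectors_written(dev):
    """Sectors written so far for the device (10th field in /proc/diskstats)."""
    with open("/proc/diskstats") as fh:
        for line in fh:
            parts = line.split()
            if parts[2] == dev:
                return int(parts[9])
    raise ValueError("device %s not found" % dev)

if __name__ == "__main__":
    print("mon store size: %.1f MB" % (store_size(MON_PATH) / 1e6))
    before = sectors_written(DEV)
    time.sleep(10)
    after = sectors_written(DEV)
    # sectors are 512 bytes; whole-device numbers, so other VMs count too
    print("writes over 10s: %.2f MB/s" % ((after - before) * 512 / 10.0 / 1e6))
```

On my mons the numbers stay tiny, which is why I haven't bothered with SSDs for them.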
Jake
On Wednesday, May 13, 2015, Götz Reinicke - IT Koordinator <goetz.reinicke@xxxxxxxxxxxxxxx> wrote:
Hi Jake,
we have the fabric interconnects.
MONs as VMs? What setup do you have, and what cluster size?
Regards . Götz
Am 13.05.15 um 15:20 schrieb Jake Young:
> I run my mons as VMs inside of UCS blade compute nodes.
>
> Do you use the fabric interconnects or the standalone blade chassis?
>
> Jake
>
> On Wednesday, May 13, 2015, Götz Reinicke - IT Koordinator
> <goetz.reinicke@xxxxxxxxxxxxxxx <mailto:goetz.reinicke@xxxxxxxxxxxxxxx>>
> wrote:
>
> Hi Christian,
>
currently we do get good discounts as a university, and the bundles were
worth it.
>
> The chassis do have multiple PSUs and n x 10Gb ports (40Gb is possible).
> The switch connection is redundant.
>
> Currently we are thinking of 10 SATA OSD nodes + x SSD cache pool nodes
> and 5 MONs, for a start.
>
> The main focus with the blades would be space saving in the rack. Till
> now I don't have any price, but that would count too in our decision :)
>
> Thanks and regards . Götz
>
<...>
--
Götz Reinicke
IT-Koordinator
Tel. +49 7141 969 82 420
E-Mail goetz.reinicke@xxxxxxxxxxxxxxx
Filmakademie Baden-Württemberg GmbH
Akademiehof 10
71638 Ludwigsburg
www.filmakademie.de
Eintragung Amtsgericht Stuttgart HRB 205016
Vorsitzender des Aufsichtsrats: Jürgen Walter MdL
Staatssekretär im Ministerium für Wissenschaft,
Forschung und Kunst Baden-Württemberg
Geschäftsführer: Prof. Thomas Schadt
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com