On 20/7/19 11:53 pm, Marc Roos wrote:
> Reverting back to filestore is quite a lot of work and time again. Maybe
> see first if with some tuning of the vms you can get better results?

None of the VMs are particularly disk-intensive. There are two users
accessing the system over a WiFi network for email, and some HTTP/SMTP
traffic coming in via an ADSL2 Internet connection. If Bluestore can't
manage this, then I'd consider it totally worthless in any enterprise
installation -- so clearly something is wrong.

> What you also can try is for io intensive vm's add an ssd pool?

How well does that work in a cluster with 0 SSD-based OSDs?

For 3 of the nodes, the cases I'm using for the servers can fit two 2.5"
drives. I have one 120GB SSD for the OS, which leaves one space spare
for the OSD.

These machines originally had 1TB 5400RPM HDDs fitted (slower ones than
the current drives), and in the beginning I had just these 3 nodes. 3TB
of raw space was getting tight, so I since added two new nodes, which
are Intel NUCs with m.2 SATA SSDs for the OS and, like the other nodes,
a single 2.5" drive bay.

This is being done as a hobby and a learning exercise, I might add -- so
while I have spent a lot of money on this, the funds I have to throw at
it are not infinite.

> I moved some exchange servers on them. Tuned down the logging, because
> that is writing constantly to disk.
> With such setup you are at least secured for the future.

The VMs I have are mostly Linux (Gentoo, some Debian/Ubuntu), with a few
OpenBSD VMs for things like routers between virtual networks.
--
Stuart Longland (aka Redhatter, VK4MSL)

I haven't lost my mind...
  ...it's backed up on a tape somewhere.
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com