Hi,

On Wed, 2013-08-28 at 10:10 +0200, Sylvain Munaut wrote:
> Hi,
>
> > I am doing some experimentation with Ceph and Xen (on the same host) and I'm
> > experiencing some problems with the rbd device that I'm using as the block
> > device. My environment is:
>
> AFAIK running the OSD and the kernel driver inside the same host is
> not recommended at all ...

I can confirm that this seems to be the case, with RBD as well as CephFS: as soon as you use the kernel-based driver to access either of them on the same node where the OSD data-dirs for that cluster are mounted, things slow to a crawl.

For CephFS, using the FUSE driver (ceph-fuse) solves that problem; see the P.S. for a sample invocation. For RBD, you want to avoid going through the kernel-based block driver. There is a technical reason for this (in short, rbd.ko and the OSD can deadlock over memory under pressure, the 'paging / memory' issue Sylvain mentions below), but I'll leave the full explanation to others.

> Personally I run storage node and compute node on the same physical
> hosts, but the OSD is isolated into a DomU to avoid the possible
> 'paging / memory' issue of running rbd.ko and osd on the same host.
> (here they have pinned cpu and memory, so no interactions).

I, too, run compute and shared storage on the same physical hardware on some clusters, and it works for me even under reasonably high load. I use RBD directly from KVM via librbd (see the P.S. for an example), which bypasses the host-OS kernel and therefore sidesteps your problem altogether. This also prevents runaway conditions on the nodes themselves when the shared storage unexpectedly overloads or fails: the VMs usually just block until the problem is cleared and then continue without any I/O errors, and the host OS remains workable throughout.

Regards,

Oliver
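
P.S.: In case it helps anyone trying the FUSE route, a ceph-fuse invocation can be as minimal as the one below. The monitor address and the mountpoint are placeholders for whatever applies to your cluster:

    # Mount CephFS through the userspace FUSE client, so the in-kernel
    # cephfs module is never involved.
    ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs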
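
And a rough sketch of the KVM/librbd setup on the qemu command line. The pool name, image name, auth user and cache mode are just examples, and this assumes the monitors and client keyring are configured in ceph.conf:

    # qemu opens the image through librbd in userspace; rbd.ko is never
    # loaded, so the kernel-client-vs-OSD interaction described above
    # cannot occur.
    qemu-system-x86_64 ... \
        -drive file=rbd:rbd/vm-disk:id=admin,format=raw,if=virtio,cache=writeback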