FreeBSD on RBD (KVM)

We've been running some tests to try to determine why our FreeBSD VMs
are performing much worse than our Linux VMs backed by RBD, especially
on writes.

Our current deployment is:
- 4x KVM Hypervisors (QEMU 2.0.0+dfsg-2ubuntu1.6)
- 2x OSD nodes (8x SSDs each, 10Gbit links to hypervisors, pool has 2x
replication across nodes)
- Hypervisors have "rbd_cache" enabled
- All VMs currently use "cache=none" (both settings are sketched below).
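For reference, a rough sketch of where those two settings live (the
image name below is a placeholder, not our actual config):

  # /etc/ceph/ceph.conf on the hypervisors
  [client]
  rbd cache = true

  # per-VM cache mode, on the RBD disk's <driver> element in the
  # libvirt domain XML
  <driver name='qemu' type='raw' cache='none'/>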

In testing we were getting ~30MB/s writes and ~100MB/s reads on
FreeBSD 10.1.  On Linux VMs we see ~150+MB/s for both writes and
reads (dd if=/dev/zero of=output bs=1M count=1024 oflag=direct).
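For anyone reproducing this, note the FreeBSD-side test is only
roughly equivalent: FreeBSD's dd (at least in 10.1) has no GNU-style
oflag=direct, so writes there go through the buffer cache.

  # Linux guest (GNU dd, direct I/O)
  dd if=/dev/zero of=output bs=1M count=1024 oflag=direct

  # FreeBSD 10.1 guest -- no oflag=direct, so not strictly apples-to-apples
  dd if=/dev/zero of=output bs=1m count=1024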

I tested several configurations on both RBD and local SSDs, and the
only time FreeBSD performance was comparable to Linux was with the
following configuration:
- Local SSD
- Qemu cache=writeback
- GPT journaling enabled (see the sketch below)
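For the journaling piece, a rough sketch of enabling it on a FreeBSD
guest (assuming "GPT journaling" here means UFS soft updates
journaling via tunefs; the device name is just an example, and tunefs
needs the filesystem unmounted or mounted read-only):

  # from single-user mode, or with the filesystem unmounted
  tunefs -j enable /dev/vtbd0p2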

We did see some performance improvement (~50MB/s writes instead of
30MB/s) when using cache=writeback on RBD.

I've read several threads regarding cache=none vs cache=writeback.
cache=none is apparently safer for live migration, but Ceph recommends
cache=writeback when rbd cache is enabled, since without writeback
QEMU won't send flush requests to librbd, risking data loss.  There
was also a patch submitted for QEMU a few months ago to make
cache=writeback safer for live migrations:
http://tracker.ceph.com/issues/2467
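Switching a guest to writeback is just the cache setting on the drive;
a minimal sketch of the QEMU form (the pool/image name is a
placeholder):

  qemu-system-x86_64 ... -drive format=raw,file=rbd:rbd/vm-disk,cache=writeback

In libvirt it's the same thing via cache='writeback' on the disk's
<driver> element.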

Has anyone been successful in getting good performance out of FreeBSD
on RBD?  Is there anything I'm just not thinking of here?



