> > I upgraded to 0.60 and that seems to have made a big difference. If I kill off
> > one of my OSDs I get around 20MB/second throughput in live testing (test
> > restore of a Xen Windows VM from USB backup), which is pretty much the
> > limit of the USB disk. If I reactivate the second OSD, throughput drops back
> > to ~10MB/second, which isn't as good but is much better than I was getting.
> >
>
> Ah, are these disks both connected through USB(2?)?
>
I guess I was a bit brief :) Both my OSD disks are SATA attached. Inside a VM I
have attached another disk, which is connected to the host via USB. This disk
contains a backup of a server (made with Windows Server Backup), and I am doing
a test restore of it, with Ceph holding the C: drive of the virtual server (i.e.
the write target).

What I was saying is that I would never expect more than about 20-30MB/s write
speed in this test, because that is approximately the limit of the USB interface
the data is coming from. This is more a production test than a benchmark, and I
was just using iostat to monitor the throughput of the /dev/rbdX interfaces
while doing the restore.

James
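
For reference, watching a single device with "iostat -x rbd0 1" is enough for
this kind of test; the short Python sketch below does roughly the same thing by
sampling the kernel's per-device counters under /sys/block. The device name
(rbd0), the one-second interval, and the script itself are illustrative
assumptions rather than what was actually run in the test above.

  #!/usr/bin/env python
  # Rough read/write throughput monitor for a block device, similar in
  # spirit to watching iostat on /dev/rbdX. Assumes the standard
  # /sys/block/<dev>/stat counters (512-byte sectors).
  import sys, time

  dev = sys.argv[1] if len(sys.argv) > 1 else "rbd0"   # device name (assumption)
  stat_path = "/sys/block/%s/stat" % dev
  interval = 1.0                                        # seconds between samples

  def sectors():
      # Fields 3 and 7 of the stat file are sectors read / sectors written.
      fields = open(stat_path).read().split()
      return int(fields[2]), int(fields[6])

  prev_r, prev_w = sectors()
  while True:
      time.sleep(interval)
      cur_r, cur_w = sectors()
      rd = (cur_r - prev_r) * 512 / interval / 1e6      # MB/s read
      wr = (cur_w - prev_w) * 512 / interval / 1e6      # MB/s written
      print("%s: read %6.1f MB/s  write %6.1f MB/s" % (dev, rd, wr))
      prev_r, prev_w = cur_r, cur_w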