Hi Nick,

With iSCSI we reach over 150 MB/s vMotion throughput for a single VM, and around 1 GB/s when migrating 7-8 VMs concurrently. Since vMotion uses 64KB block sizes, latency/IOPS is a large factor: you need either controllers with write-back cache or an all-flash setup. HDDs without write cache will suffer even with an external WAL/DB on SSDs, giving around 80 MB/s per vMotion migration. It may be possible to get higher vMotion speeds with fancy striping, but I would not recommend this unless the total queue depth across all your VMs is small compared to the number of OSDs.

Regarding thin provisioning: a VMDK provisioned as lazy zeroed does have a large "initial" impact on random write performance, potentially up to 10x slower. If you write a random 64KB to an unallocated VMFS block, VMFS will first write 1MB of zeros to fill the block and then write the 64KB of client data, so although a lot of data is being written, the perceived client bandwidth is very low. Performance gradually improves over time until the disk is fully provisioned. It is also possible to create the VMDK as thick provisioned eager zeroed at creation time. Again, this is more apparent with random writes than with sequential or vMotion load.

Maged
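As a rough illustration of the first-write penalty described above, here is a back-of-envelope sketch in Python. The 1 MB zero-fill and 64 KB client write come from the description above; the 150 MB/s backend figure is only an assumed placeholder, and the variable names are made up for the example:

# Back-of-envelope sketch of the lazy-zeroed VMFS first-write penalty.
# Sizes below are illustrative assumptions, not measured values.

VMFS_BLOCK = 1 * 1024 * 1024    # 1 MB VMFS block that must be zero-filled first
CLIENT_IO  = 64 * 1024          # 64 KB random write issued by the guest
BACKEND_BW = 150 * 1024 * 1024  # assumed raw backend write bandwidth (placeholder)

# First touch of an unallocated block: zero-fill the whole block, then write the data.
bytes_written = VMFS_BLOCK + CLIENT_IO     # ~1088 KB actually written
amplification = bytes_written / CLIENT_IO  # ~17x more data than the guest sent

# The guest only sees its 64 KB completing, so perceived bandwidth drops accordingly.
perceived_bw = BACKEND_BW * CLIENT_IO / bytes_written

print(f"write amplification: {amplification:.1f}x")
print(f"perceived client bandwidth: {perceived_bw / 1024 / 1024:.1f} MB/s "
      f"(out of {BACKEND_BW / 1024 / 1024:.0f} MB/s backend)")

With those assumed numbers this works out to roughly 17x write amplification and under 10 MB/s of perceived client bandwidth on first writes, in the same ballpark as the "up to 10x slower" figure mentioned above.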
On 2018-06-29 18:48, Nick Fisk wrote: