Poor Windows performance on Ceph RBD.

Dear all,

Maybe someone can give me a pointer here. We are running OpenNebula with Ceph RBD as the back-end store. We have a pool of spinning disks for creating large, low-demand data disks, mainly for backups and other cold storage. Everything is fine with Linux VMs. Windows VMs, however, perform poorly: roughly a factor of 20 slower than a comparably configured Linux VM.

If anyone has pointers on what to look for, we would be very grateful.

The OpenNebula installation is more or less default. The current OS, libvirt and QEMU versions we use are:

CentOS 7.6 with stock kernel 3.10.0-1062.1.1.el7.x86_64
libvirt-client.x86_64                      4.5.0-23.el7_7.1            @updates 
qemu-kvm-ev.x86_64                         10:2.12.0-33.1.el7          @centos-qemu-ev
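
For reference, the disk section of the domain XML that OpenNebula generates looks roughly like the following. This is an illustration rather than our exact config: the pool, image, monitor host and secret names are placeholders.

```xml
<!-- Illustrative libvirt disk definition for an RBD-backed VM.
     Pool/image/host/secret names are placeholders, not our actual values. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source protocol='rbd' name='one/one-123-disk-0'>
    <host name='mon1.example.com' port='6789'/>
  </source>
  <auth username='libvirt'>
    <secret type='ceph' usage='client.libvirt secret'/>
  </auth>
  <target dev='vda' bus='virtio'/>
</disk>
```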

Some benchmark results, from best- to worst-performing workload:

rbd bench --io-size 4M --io-total 4G --io-pattern seq --io-type write --io-threads 16 : 450MB/s
rbd bench --io-size 4M --io-total 4G --io-pattern seq --io-type write --io-threads 1  : 230MB/s
rbd bench --io-size 1M --io-total 4G --io-pattern seq --io-type write --io-threads 1  : 190MB/s
rbd bench --io-size 64K --io-total 4G --io-pattern seq --io-type write --io-threads 1  : 150MB/s
rbd bench --io-size 64K --io-total 1G --io-pattern rand --io-type write --io-threads 1 : 26MB/s
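
For orientation, a back-of-envelope calculation (my own arithmetic, not cluster telemetry) translates the single-threaded 64K random-write result into IOPS and per-operation latency, which lines up with a spinning-disk seek plus a network round trip per write:

```python
# Back-of-envelope: convert the single-threaded 64K random-write
# throughput above into IOPS and average per-operation latency.
# Single-threaded, so operations are effectively serialized.
io_size_kb = 64
throughput_mb_s = 26  # rbd bench rand-write result above

iops = throughput_mb_s * 1024 / io_size_kb  # operations per second
latency_ms = 1000 / iops                    # average time per operation

print(f"{iops:.0f} IOPS, ~{latency_ms:.1f} ms per 64K write")
# → 416 IOPS, ~2.4 ms per 64K write
```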

dd with conv=fdatasync gives an impressive 500 MB/s inside a Linux VM for a sequential write of 4 GB.

We copied a couple of large ISO files inside the Windows VM, and for the first ca. 1 to 1.5 GB it performs as expected. Thereafter, however, write speed drops rapidly to ca. 25 MB/s and does not recover. It is almost as if Windows translates large sequential writes into small random writes.
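
To test that hypothesis, something like the following fio job could be run inside both a Linux and a Windows guest for a direct comparison. The parameters are a guess at what the Windows copy effectively issues, not a measured trace, and the file name is a placeholder:

```ini
; Hypothetical small-random-write job approximating the suspected
; Windows behaviour; adjust filename/size to taste.
[win-like-writes]
filename=testfile
size=4g
rw=randwrite
bs=64k
direct=1
iodepth=1
ioengine=libaio
; on Windows guests use: ioengine=windowsaio
```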

If anyone has seen and solved this before, please let us know.

Thanks and best regards,

=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx



