Slow performance in Windows VM

Hello, guys

I am facing poor performance in a Windows 2012 R2 instance running on RBD (OpenStack cluster). The RBD disk is 17 TB in size. My Ceph cluster consists of:

- 3 monitor nodes (Celeron G530/6 GB RAM, DualCore E6500/2 GB RAM, Core2Duo E7500/2 GB RAM). Each node has a 1 Gbit link to the frontend subnet of the Ceph cluster.
- 2 block nodes (Xeon E5620/32 GB RAM/2*1 Gbit NIC). Each node has 2*500 GB HDD for the operating system and 9*3 TB SATA HDDs (WD SE), so 18 OSD daemons in total across the 2 nodes. Journals are placed on the same HDDs as the RADOS data; I know it would be better to use separate SSDs for that purpose (the layout can be confirmed with the commands just below this list).
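Commands to confirm the layout and per-OSD utilisation (output omitted here; ceph osd df needs Hammer or newer):

ceph -s          # overall health and client IO
ceph osd tree    # OSD-to-host mapping
ceph osd df      # per-OSD usage and PG count (Hammer+)
ceph df          # pool-level usage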

When I tested the new Windows instance, performance was good (read/write around 100 MB/s). But after I copied 16 TB of data to the Windows instance, read performance dropped to 10 MB/s. The data on the VM is images and video.
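For reference, the read path can also be tested outside the guest with fio's rbd engine (fio must be built with rbd support; the cinder client name is taken from the libvirt auth below; read-only, so it should be safe against the live volume):

fio --name=rbd-read --ioengine=rbd --clientname=cinder \
    --pool=os-volumes --rbdname=volume-4680524c-2c10-47a3-af59-2e1bd12a7ce4 \
    --rw=read --bs=4M --iodepth=16 --runtime=60 --time_based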

ceph.conf on the client side:

[global]
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
filestore xattr use omap = true
filestore max sync interval = 10
filestore queue max ops = 3000
filestore queue max bytes = 1048576000
filestore queue committing max ops = 4096
filestore queue committing max bytes = 16777216
filestore op threads = 20
filestore flusher = false
filestore journal parallel = false
filestore journal writeahead = true
journal dio = true
journal aio = true
journal force aio = true
journal block align = true
journal max write bytes = 1048576000
journal_discard = true
osd pool default size = 2 # Write an object n times.
osd pool default min size = 1
osd pool default pg num = 333
osd pool default pgp num = 333
osd crush chooseleaf type = 1

[client]
rbd cache = true
rbd cache size = 67108864
rbd cache max dirty = 50331648
rbd cache target dirty = 33554432
rbd cache max dirty age = 2
rbd cache writethrough until flush = true
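To verify which of these values the daemons actually picked up (misspelled keys are silently ignored), the running processes can be queried over the admin socket; the socket paths/IDs below are examples, and the client socket only exists if an "admin socket" path is configured in the [client] section:

# on a block node, OSD id 0 as an example
ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep -E 'filestore|journal'
# on the hypervisor
ceph --admin-daemon /var/run/ceph/ceph-client.cinder.asok config show | grep rbd_cache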


rados bench from a block node shows:
rados bench -p scbench 120 write --no-cleanup

Total time run: 120.399337
Total writes made: 3538
Write size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 117.542
Stddev Bandwidth: 9.31244
Max bandwidth (MB/sec): 148
Min bandwidth (MB/sec): 92
Average IOPS: 29
Stddev IOPS: 2
Max IOPS: 37
Min IOPS: 23
Average Latency(s): 0.544365
Stddev Latency(s): 0.35825
Max latency(s): 5.42548
Min latency(s): 0.101533
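For the read tests below to reflect the disks rather than RAM, the page cache should be dropped on both block nodes first, e.g.:

sync
echo 3 > /proc/sys/vm/drop_caches    # run as root on each OSD node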


rados bench -p scbench 120 seq

Total time run: 120.880920
Total reads made: 1932
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 63.9307
Average IOPS: 15
Stddev IOPS: 3
Max IOPS: 25
Min IOPS: 5
Average Latency(s): 0.999095
Max latency(s): 8.50774
Min latency(s): 0.0391591

rados bench -p scbench 120 rand

Total time run: 121.059005
Total reads made: 1920
Read size: 4194304
Object size: 4194304
Bandwidth (MB/sec): 63.4401
Average IOPS: 15
Stddev IOPS: 4
Max IOPS: 26
Min IOPS: 1
Average Latency(s): 1.00785
Max latency(s): 6.48138
Min latency(s): 0.038925
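rados bench uses 16 concurrent operations by default; the reads could also be tried with more parallelism (the -t flag sets the number of concurrent ops) to see whether the bottleneck is per-operation latency:

rados bench -p scbench 60 seq -t 32
rados bench -p scbench 60 rand -t 32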

Fragmentation on the XFS partitions is no more than 1%.
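(Checked roughly like this on each OSD data partition; the device name is just an example:)

xfs_db -r -c frag /dev/sdb1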

In libvirt the disk is attached as follows:

<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writeback'/>
  <auth username='cinder'>
    <secret type='ceph' uuid='*********'/>
  </auth>
  <source protocol='rbd' name='os-volumes/volume-4680524c-2c10-47a3-af59-2e1bd12a7ce4'>
    <host name='C.C.C.C' port='6789'/>
    <host name='B.B.B.B' port='6789'/>
    <host name='A.A.A.A' port='6789'/>
  </source>
  <target dev='vdb' bus='virtio'/>
  <serial>4680524c-2c10-47a3-af59-2e1bd12a7ce4</serial>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
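The image parameters (format, order/object size, any striping) for that volume can be checked with rbd info; the pool and volume name are the ones from the XML above, add --id cinder and a keyring if required by your auth setup:

rbd info os-volumes/volume-4680524c-2c10-47a3-af59-2e1bd12a7ce4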

Does anybody have any ideas?




----


Konstantin

