Re: slow read-performance inside the vm

Hi,

Also check your CPU usage; the Dell PowerEdge 2900 is quite old (6-8 years old).

The more IOPS you need, the more CPU you need.

I don't remember what the default block size of rados bench is.
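
Judging by the "Write size: 4194304" line in your output below, it looks like it defaults to 4 MB. If you want to test smaller blocks as well, rados bench accepts a -b option; for example (pool name and runtime copied from your test, the block size here is just an example):

rados bench -p kvm 200 write -b 4096 --no-cleanup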


----- Original Message -----
From: "Patrik Plank" <patrik@xxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Thursday, January 8, 2015 17:36:43
Subject: slow read-performance inside the vm






Hi, 

first of all, I am a “ceph beginner”, so I am sorry for the trivial questions :). 

I have built a three-node Ceph cluster for virtualization. 



Hardware: 



Dell PowerEdge 2900 

8 x 300 GB SAS 15K.7 with Dell PERC 6/i in RAID 0 

2 x 120 GB SSD in RAID 1 with a Fujitsu RAID controller for journal + OS 

16 GB RAM 

2 x Intel Xeon E5410, 2.3 GHz 

2 x dual-port 1 Gb NIC 



Configuration: 




Ceph 0.90 

2 x network bonds, each 2 x 1 Gb (public + cluster network), with MTU 9000 

read_ahead_kb = 2048 

/dev/sda1 on /var/lib/ceph/osd/ceph-0 type xfs (rw,noatime,attr2,inode64,noquota) 
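
(The read_ahead_kb value above is the per-device sysfs setting; roughly, it gets applied like this. Just a sketch, using sda from the mount line above; adjust the device name as needed:)

echo 2048 > /sys/block/sda/queue/read_ahead_kb
cat /sys/block/sda/queue/read_ahead_kb    # verify the value took effect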





ceph.conf: 




[global] 



fsid = 1afaa484-1e18-4498-8fab-a31c0be230dd 

mon_initial_members = ceph01 

mon_host = 10.0.0.20,10.0.0.21,10.0.0.22 

auth_cluster_required = cephx 

auth_service_required = cephx 

auth_client_required = cephx 

filestore_xattr_use_omap = true 

public_network = 10.0.0.0/24 

cluster_network = 10.0.1.0/24 

osd_pool_default_size = 3 

osd_pool_default_min_size = 1 

osd_pool_default_pg_num = 128 

osd_pool_default_pgp_num = 128 

filestore_flusher = false 




[client] 

rbd_cache = true 

rbd_readahead_trigger_requests = 50 

rbd_readahead_max_bytes = 4096 

rbd_readahead_disable_after_bytes = 0 







rados bench -p kvm 200 write --no-cleanup 



Total time run: 201.139795 

Total writes made: 3403 

Write size: 4194304 

Bandwidth (MB/sec): 67.674 

Stddev Bandwidth: 66.7865 

Max bandwidth (MB/sec): 212 

Min bandwidth (MB/sec): 0 

Average Latency: 0.945577 

Stddev Latency: 1.65121 

Max latency: 13.6154 

Min latency: 0.085628 



rados bench -p kvm 200 seq 



Total time run: 63.755990 

Total reads made: 3403 

Read size: 4194304 

Bandwidth (MB/sec): 213.502 

Average Latency: 0.299648 

Max latency: 1.00783 

Min latency: 0.057656 
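
(I only ran the sequential read test so far; as far as I understand, a random read test over the same objects left behind by the write run above would be something like:)

rados bench -p kvm 200 rand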




So here my questions: 



With the values above, I get a write performance of about 90 MB/s and a read performance of about 29 MB/s inside the VM (Windows 2008 R2 with the virtio driver and writeback cache enabled). 
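
(For reference, the RBD image is attached to the guest roughly like this; just a sketch of a QEMU invocation with a virtio disk and writeback cache. The image name, client id and the remaining options are placeholders, not my exact command line:)

qemu-system-x86_64 -enable-kvm -m 4096 \
  -drive format=raw,if=virtio,cache=writeback,file=rbd:kvm/win2008r2:id=admin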

Are these values normal for my configuration and hardware? The read performance seems slow to me. 

Would the read performance be better if I ran a separate OSD for each disk, as sketched below? 
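
(What I have in mind is roughly the following, assuming the PERC would expose each disk individually and using ceph-deploy, with the journals on SSD partitions. Host, device and partition names are only placeholders:)

ceph-deploy osd create ceph01:sdb:/dev/sdj1
ceph-deploy osd create ceph01:sdc:/dev/sdj2
# and so on, one OSD per data disk, each with its own journal partition on the SSD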




Best regards 

Patrik 




_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



