Re: Possible improvements for a slow write speed (excluding independent SSD journals)


 



Hi,
I'm currently benchmarking a full-SSD setup (not finished yet),

but with 4 OSDs on Intel S3500 SSDs (replication x1), 4M random writes reach around 550 MB/s.

With 4K random writes I'm around 40,000 IOPS (10,000 IOPS per OSD; the limit is the disk's O_DSYNC write speed).

This is with Hammer.
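
For reference, a rough way to measure that per-disk sync write limit (a sketch, assuming fio is installed; /dev/sdX is only a placeholder for the SSD under test, and fio's --sync=1 opens the device O_SYNC, a close stand-in for the journal's O_DSYNC writes):

# WARNING: this writes directly to the raw device and destroys its data
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test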



----- Original Message -----
From: "J-P Methot" <jpmethot@xxxxxxxxxx>
To: "ceph-users" <ceph-users@xxxxxxxxxxxxxx>
Sent: Monday, April 20, 2015 16:30:41
Subject: Possible improvements for a slow write speed (excluding independent SSD journals)

Hi, 

This is similar to another thread running right now, but since our
current setup is completely different from the one described in the
other thread, I thought it would be better to start a new one.

We are running Ceph Firefly 0.80.8 (soon to be upgraded to 0.80.9). We
have 6 OSD hosts with 16 OSDs each (so a total of 96 OSDs). Each OSD is a
Samsung SSD 840 EVO on which I can reach write speeds of roughly 400
MB/s, plugged in JBOD mode into a controller that can theoretically transfer
at 6 Gb/s. All of that is linked to OpenStack compute nodes over two
bonded 10 Gbps links (so a maximum transfer rate of 20 Gbps).
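
(Rough back-of-the-envelope with the numbers above: 16 SSDs x ~400 MB/s is ~6.4 GB/s of raw disk bandwidth per host, while two bonded 10 Gbps links top out around 2.5 GB/s, so for reads the network is the expected ceiling.)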

When I run rados bench from the compute nodes, I reach the network cap
in read speed. Write speeds, however, are vastly inferior, reaching only about
920 MB/s. If I have 4 compute nodes running the write benchmark at the
same time, I see the number plummet to 350 MB/s. For our planned
usage, we find that rather slow, considering the high number of
virtual machines we will run on this cluster.
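
For reference, a typical rados bench invocation for these write and sequential-read tests would be roughly the following (a sketch; the pool name and thread count are placeholders, and --no-cleanup keeps the benchmark objects around so the read pass has something to read):

rados bench -p testpool 60 write -t 16 --no-cleanup
rados bench -p testpool 60 seq -t 16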

Of course, the first thing to do would be to move the journals to
faster drives. However, these are SSDs we're talking about; we don't
really have access to faster drives, yet I must find a way to get better
write speeds. Thus, I am looking for suggestions on how to make this
faster.

I have also thought of a few options myself:
-Upgrading to the latest stable Hammer release (would that really give
me a big performance increase?)
-CRUSH map modifications? (this is a long shot, since I'm still using the
default CRUSH map, but maybe there's a change I could make there to improve
performance; the usual edit workflow is sketched after this list)
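
If you do end up experimenting with the CRUSH map, the usual edit round-trip is roughly the following (a sketch; the file names are just placeholders):

ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# edit crushmap.txt (buckets, rules, tunables), then recompile and inject it
crushtool -c crushmap.txt -o crushmap-new.bin
ceph osd setcrushmap -i crushmap-new.bin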

Any suggestions as to anything else I could tweak would be greatly
appreciated.

For reference, here's part of my ceph.conf: 

[global] 
auth_service_required = cephx 
filestore_xattr_use_omap = true 
auth_client_required = cephx 
auth_cluster_required = cephx 
osd pool default size = 3 


osd pg bits = 12 
osd pgp bits = 12 
osd pool default pg num = 800 
osd pool default pgp num = 800 

[client] 
rbd cache = true 
rbd cache writethrough until flush = true 

[osd] 
filestore_fd_cache_size = 1000000 
filestore_omap_header_cache_size = 1000000 
filestore_fd_cache_random = true 
filestore_queue_max_ops = 5000 
journal_queue_max_ops = 1000000 
max_open_files = 1000000 
osd journal size = 10000 

-- 
====================== 
Jean-Philippe Méthot 
Administrateur système / System administrator 
GloboTech Communications 
Phone: 1-514-907-0050 
Toll Free: 1-(888)-GTCOMM1 
Fax: 1-(514)-907-0750 
jpmethot@xxxxxxxxxx 
http://www.gtcomm.net 

_______________________________________________ 
ceph-users mailing list 
ceph-users@xxxxxxxxxxxxxx 
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com 