Re: Slow read on RBD mount, Hammer 0.94.5

Hi Nick and Udo,

thanks, very helpful. I tweaked some of the config parameters along the lines Udo suggests, but I still only get some 80 MB/s or so.

The client machine is running kernel 4.3.4 and has a generous readahead configured (blockdev reports 512-byte sectors, so 262144 is 128 MiB):

$ sudo blockdev --getra /dev/rbd0
262144

Still not more than about 80-90 MB/s.
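
For anyone reproducing this, readahead can be raised the same way it is read; a sketch, assuming /dev/rbd0 is the mapped device:

$ sudo blockdev --setra 262144 /dev/rbd0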

For writes, the parallelization is amazing and I see very impressive speeds, but why is read performance so far behind? Why are reads not parallelized the same way writes are? Is this something coming in the Jewel release, or is it planned further down the road?

Please let me know if there is a way to give clients better single-threaded read performance for large files.
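
In case it helps, a sketch of the kind of parallel-read test that could confirm whether queue depth is the limit; the fio flags and the path /mnt/rbd/bigfile are illustrative, not my exact command:

$ fio --name=seqread --filename=/mnt/rbd/bigfile --rw=read --bs=4M \
      --ioengine=libaio --iodepth=16 --direct=1 --size=10G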

Thanks and regards,

Mike

On 4/20/16 10:43 PM, Nick Fisk wrote:


-----Original Message-----
From: ceph-users [mailto:ceph-users-bounces@xxxxxxxxxxxxxx] On Behalf Of
Udo Lembke
Sent: 20 April 2016 07:21
To: ceph-users@xxxxxxxxxxxxxx
Subject: Re:  Slow read on RBD mount, Hammer 0.94.5

Hi Mike,
I don't have experience with RBD mounts, but I see the same effect with
RBD.

You can do some tuning to get better results (disable debug logging and so on).

As a hint, here are some values from a ceph.conf:
[osd]
     debug asok = 0/0
     debug auth = 0/0
     debug buffer = 0/0
     debug client = 0/0
     debug context = 0/0
     debug crush = 0/0
     debug filer = 0/0
     debug filestore = 0/0
     debug finisher = 0/0
     debug heartbeatmap = 0/0
     debug journal = 0/0
     debug journaler = 0/0
     debug lockdep = 0/0
     debug mds = 0/0
     debug mds balancer = 0/0
     debug mds locker = 0/0
     debug mds log = 0/0
     debug mds log expire = 0/0
     debug mds migrator = 0/0
     debug mon = 0/0
     debug monc = 0/0
     debug ms = 0/0
     debug objclass = 0/0
     debug objectcacher = 0/0
     debug objecter = 0/0
     debug optracker = 0/0
     debug osd = 0/0
     debug paxos = 0/0
     debug perfcounter = 0/0
     debug rados = 0/0
     debug rbd = 0/0
     debug rgw = 0/0
     debug throttle = 0/0
     debug timer = 0/0
     debug tp = 0/0
     filestore_op_threads = 4
     osd max backfills = 1
     osd mount options xfs = "rw,noatime,inode64,logbufs=8,logbsize=256k,allocsize=4M"
     osd mkfs options xfs = "-f -i size=2048"
     osd recovery max active = 1
     osd_disk_thread_ioprio_class = idle
     osd_disk_thread_ioprio_priority = 7
     osd_disk_threads = 1
     osd_enable_op_tracker = false
     osd_op_num_shards = 10
     osd_op_num_threads_per_shard = 1
     osd_op_threads = 4
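
These settings can also be pushed to running OSDs without a restart; a sketch using injectargs with a subset of the debug options above:

ceph tell osd.* injectargs '--debug_osd 0/0 --debug_ms 0/0 --debug_filestore 0/0'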

Udo

On 19.04.2016 11:21, Mike Miller wrote:
Hi,

RBD mount
ceph v0.94.5
6 OSD nodes with 9 HDDs each
10 Gbit/s public and private networks
3 MON nodes on a 1 Gbit/s network

An rbd mounted with a btrfs filesystem performs really badly when
reading. I tried readahead in all combinations, but that does not help in
any way.

Write rates are very good, in excess of 600 MB/s and up to 1200 MB/s,
averaging 800 MB/s. Read rates on the same mounted rbd are about 10-30
MB/s!?

What kernel are you running? Older kernels had an issue where readahead was
capped at 2MB. In order to get good read speeds you need readahead set to
about 32MB+.
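
To make a larger readahead persist across re-mapping, a udev rule is one option; a minimal sketch, assuming the kernel client exposes devices as rbd* (the rule file name and the 128 MiB value are illustrative):

# /etc/udev/rules.d/80-rbd.rules
KERNEL=="rbd*", ACTION=="add", ATTR{queue/read_ahead_kb}="131072"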



Of course, both writes and reads are from a single client machine with
a single write/read command, so I am looking at single-threaded
performance.
Actually, I was hoping to see at least 200-300 MB/s when reading, but
I am seeing 10% of that at best.
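
For reference, a sketch of the single-stream read test meant here, dropping the page cache first so cached data does not skew the result; the file path is a placeholder:

$ sync; echo 3 | sudo tee /proc/sys/vm/drop_caches
$ dd if=/mnt/rbd/bigfile of=/dev/null bs=4M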

Thanks for your help.

Mike
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


