Re: osd deployment affects samba read performance


On Tue, 11 Oct 2011, huang jun wrote:
> hi, all
> Recently we tested samba read performance and found something strange.
> We tested two setups:
> the cluster includes 1 MON and 1 MDS, and
> 1) 2 OSDs on the same machine
> 2) 2 OSDs on different machines
> When we use samba to test reads,
> the first setup gives about 110 MB/s, but the second
> gives only 60 MB/s.
> 
> We analysed the debug log /var/log/ceph/osd.*.log:
> in setup 1), the OSD spends about 7-8 ms (from sending the
> osd_op_reply to getting the next client read request),
> but in 2) it spends about 12-14 ms.

The osd_op_reply message shows up in the osd log when the message is 
queued.  Since these are reads, the messages are large (they carry the 
data payload) and take longer to deliver when they have to cross the 
network.
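A back-of-envelope check (not from the thread itself) is consistent with this: if only one read is in flight at a time, throughput is just request size divided by round-trip latency. The ~800 KB request size below is a hypothetical value inferred from the reported numbers, not something stated in the mail.

```python
# Serial reads: throughput = request_size / round_trip_latency.
# The 800 KB request size is an assumption inferred from the reported figures.
def serial_throughput_mb_s(request_kb, latency_ms):
    """Throughput in MB/s when only one read is in flight at a time."""
    return (request_kb / 1024.0) / (latency_ms / 1000.0)

print(serial_throughput_mb_s(800, 7.5))   # roughly 104 MB/s (same-host, ~7-8 ms)
print(serial_throughput_mb_s(800, 13.0))  # roughly 60 MB/s (cross-host, ~12-14 ms)
```

The match with the reported 110 MB/s and 60 MB/s suggests the gap is latency-bound, not bandwidth-bound.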

It sounds like the problem is that only a single read is being issued at a 
time (i.e. readahead isn't working).  Are you using the kernel client or 
libceph/cfuse?  Readahead was only recently fixed in the kernel client.  
The patches are in the master branch of ceph-client.git, but are not yet 
upstream; they should be merged in 3.2-rc1 in the upcoming merge window.
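To illustrate why working readahead would close the gap, here is a toy model (my own sketch, not from the thread): with several reads outstanding, latency is overlapped and throughput grows until the link saturates. The function name, the ~800 KB request size, and the ~117 MB/s gigabit link figure are all assumptions for illustration.

```python
def pipelined_throughput_mb_s(request_kb, latency_ms, in_flight, link_mb_s):
    """Toy model: with `in_flight` reads outstanding (readahead), serial
    throughput scales linearly until the network link saturates."""
    serial = (request_kb / 1024.0) / (latency_ms / 1000.0)
    return min(serial * in_flight, link_mb_s)

# Cross-host case: one read in flight vs. four in flight on a ~117 MB/s link.
print(pipelined_throughput_mb_s(800, 13.0, 1, 117.0))  # roughly 60 MB/s
print(pipelined_throughput_mb_s(800, 13.0, 4, 117.0))  # link-limited at 117 MB/s
```

Under this model, two or three outstanding reads are already enough to hide the extra cross-host round-trip time.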

sage

