Re: Ceph RBD performance - random writes

On 09/08/12 23:42, Mark Nelson wrote:
On 8/8/12 10:54 PM, Mark Kirkwood wrote:
On 09/08/12 12:43, Mark Kirkwood wrote:




I tried out a raft of xfs config changes and also made the Ceph
journal really big (10G):

$ mkfs.xfs -f -l internal,size=1024m -d agcount=4 /dev/sd[b,c]2

+ mount options with nobarrier,logbufs=8

The results improved a little, but performance was still very slow for
small request sizes...
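The tweaked setup above can be sketched as a dry-run script. The device and mount point below are assumptions (substitute your own OSD data partition), and the commands are echoed rather than executed, since mkfs.xfs destroys the partition's contents:

```shell
#!/bin/sh
# Hypothetical OSD data partition and mount point -- adjust to suit.
DEV=/dev/sdb2
MNT=/var/lib/ceph/osd.0

# Bigger internal xfs log plus fewer allocation groups, as tried above.
MKFS_CMD="mkfs.xfs -f -l internal,size=1024m -d agcount=4 $DEV"
# nobarrier,logbufs=8 from the post; noatime is an assumption here.
MOUNT_CMD="mount -o noatime,nobarrier,logbufs=8 $DEV $MNT"

# Echo only -- remove the echoes to actually run (destructive!).
echo "$MKFS_CMD"
echo "$MOUNT_CMD"
```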

Some more careful analysis showed that all of the benefit came from the
Ceph storage reinit after the filesystem was remade: going back
gradually to the default filesystem options (plain mkfs.xfs, default
mount with noatime,discard) and the 2G journal produced the same numbers
as I posted with the tweaked settings.

So, sorry - it appears nothing was gained (on this system, anyway) from
said tweaking.

Regards

Mark

Hi Mark,

Would you mind installing and running collectl during your test? I think it's in the apt repositories now in 12.04.

Try "collectl -sD -oT --dskfilt sd<N>,sd<M>", where the dskfilt arguments are the devices backing your OSD(s). I'd like to see what the device wait and svc times look like on your setup in both cases.
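For example, the invocation can be assembled like this (sdb and sdc are hypothetical device names -- substitute whatever disks your OSDs sit on):

```shell
# Hypothetical OSD data devices; replace with your own.
OSD_DEVS="sdb,sdc"
# -sD: detailed disk stats; -oT: timestamp each sample;
# --dskfilt: restrict output to the listed devices.
CMD="collectl -sD -oT --dskfilt $OSD_DEVS"
echo "$CMD"
```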


Ok, yeah it is in the 12.04 repo - will do.

There could well be an additional factor connected with xfs and lots of files on these Intel 520s - I have just had a conversation with a workmate who switched from xfs to ext4 for that reason. I will see whether ext4 or btrfs (scary) do any better on these drives...

Cheers

Mark
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
