Re: Impact of fancy striping


 



Did you try moving the journals to separate SSDs?
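
Roughly, that would mean pointing each OSD's journal at an SSD partition, e.g. in ceph.conf (the partition path and journal size below are only placeholders for whatever you would actually use):

    [osd.0]
        osd journal = /dev/disk/by-partlabel/journal-osd-0
        osd journal size = 10240

then, with the OSD stopped, flushing the old journal (ceph-osd -i 0 --flush-journal) and creating the new one (ceph-osd -i 0 --mkjournal).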

It was recently discovered that, due to a kernel bug/design, journal writes are translated into device cache flush commands. With that in mind, I also wonder whether implementing the workaround would improve performance when the journal and the OSD are on the same physical drive, since the system is presumably hitting spindle latency on every write at the moment.

On 2013-11-29 12:46, nicolasc wrote:
Hi everyone,

I am currently testing a use case with large rbd images (several TB),
each containing an XFS filesystem, which I mount on local clients. I
have been testing the write throughput to a single file in the XFS
mount, using "dd oflag=direct", for various block sizes.
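
For reference, the tests looked roughly like this (the mount point, file name and counts are only examples):

    # direct I/O writes with different block sizes
    dd if=/dev/zero of=/mnt/rbd-test/file bs=1G count=4 oflag=direct
    dd if=/dev/zero of=/mnt/rbd-test/file bs=4M count=1024 oflag=direct
    dd if=/dev/zero of=/mnt/rbd-test/file bs=4k count=262144 oflag=direct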

With the default config, the "XFS writes with dd" show very good
performance for 1GB blocks, but it drops to average HDD performance
for 4MB blocks, and to only a few MB/s for 4kB blocks. Changing the
XFS block size did not help (the maximum block size in XFS is 256kB
anyway), so I tried fancy striping.
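
For reference, the block size was changed at mkfs time, with something like this (the device path is only an example):

    mkfs.xfs -f -b size=4096 /dev/rbd1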

First, using 4kB rados objects to store the 4kB stripes was awful,
because rados does not like small objects. Then, I used fancy striping
to store several 4kB stripes into a single 4MB object, but it hardly
improved the performance with 4kB blocks, while drastically degrading
the performance for large blocks.
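
Concretely, the two setups were created along these lines (pool and image names and the exact stripe count are illustrative, and depending on the rbd version the flag may be spelled "--format 2" instead of "--image-format 2"):

    # 4kB objects, no fancy striping: one tiny RADOS object per 4kB stripe
    # (--size is in MB, so 4194304 = 4TB)
    rbd create --image-format 2 --order 12 --size 4194304 testpool/img-4k-objects

    # 4MB objects (order 22) carved into 4kB stripe units
    rbd create --image-format 2 --order 22 --stripe-unit 4096 --stripe-count 16 \
        --size 4194304 testpool/img-striped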

Given my use case, the block size of writes cannot exceed 4MB. I do
not know a lot of applications that write to disk in 1GB blocks.
Currently, on a 6-node, 54-OSD cluster, with journals on dedicated
SAS disks and a dedicated 10GbE uplink, I am getting performance
equivalent to a basic local disk.

So I am wondering: is it possible to get good performance with XFS
on rbd images, using a reasonable block size?

In case you think the answer is "yes", I would greatly appreciate it
if you could give me a clue about the striping magic involved.

Best regards,

Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





