Hi everyone,
I am currently testing a use-case with large rbd images (several TB),
each containing an XFS filesystem, which I mount on local clients. I
have been testing the write throughput to a single file in the XFS
mount, using "dd oflag=direct" with various block sizes.
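For reference, the test commands looked roughly like this (the mount
point and file name are just placeholders):

  dd if=/dev/zero of=/mnt/rbd-test/bigfile bs=1G count=4 oflag=direct
  dd if=/dev/zero of=/mnt/rbd-test/bigfile bs=4M count=1024 oflag=direct
  dd if=/dev/zero of=/mnt/rbd-test/bigfile bs=4k count=1048576 oflag=direct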
With a default config, the "XFS writes with dd" show very good
performance for 1GB blocks, but throughput drops to average HDD
performance for 4MB blocks, and to only a few MB/s for 4kB blocks.
Changing the XFS block size did not help (the maximum block size in
XFS is 256kB anyway), so I tried fancy striping.
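For the record, changing the block size just means reformatting with
something like the following, where /dev/rbd1 stands for whatever
device the image is mapped to:

  mkfs.xfs -f -b size=4096 /dev/rbd1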
First, using 4kB rados objects to store the 4kB stripes gave awful
results, because rados does not handle small objects well. Then, I used
fancy striping to pack several 4kB stripes into a single 4MB object,
but it hardly improved performance with 4kB blocks, while drastically
degrading performance for large blocks.
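For reference, the kind of striping setup I mean is something like
this (the stripe count is just an example, the pool and image names
are placeholders, and --size is in MB here):

  # 4MB objects (order 22), 4kB stripe units spread over 16 objects
  rbd create --image-format 2 --order 22 \
      --stripe-unit 4096 --stripe-count 16 \
      --size 4194304 testpool/striped-image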
Given my use-case, the block size of writes cannot exceed 4MB; I do
not know a lot of applications that write to disk in 1GB blocks.
Currently, on a 6-node, 54-OSD cluster, with journals on dedicated SAS
disks and a dedicated 10GbE uplink, I am getting performance
equivalent to a basic local disk.
So I am wondering: is it possible to get good performance with XFS on
rbd images, using a reasonable block size?
If you think the answer is "yes", I would greatly appreciate it if you
could give me a clue about the striping magic involved.
Best regards,
Nicolas Canceill
Scalable Storage Systems
SURFsara (Amsterdam, NL)