Some question about data placement

I'm still lost in the documentation.

Let's assume I have 8 OSDs on a single server (osd.[0-7]). I use CephFS and want redundancy 2 (i.e. each piece of data on two OSDs) and striping of each file across all OSDs (to get some write performance).
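If I understand the docs correctly, the replica count is set per pool rather than in the CRUSH map, so this is what I tried (pool names assumed to be the CephFS defaults, 'data' and 'metadata'):

```shell
# Ask for two replicas on the CephFS pools
# ('data' and 'metadata' are the default pool names; adjust if yours differ)
ceph osd pool set data size 2
ceph osd pool set metadata size 2

# Check what the cluster actually thinks
# (exact output format may differ between Ceph versions)
ceph osd dump | grep size
```

Is that the right knob, or does the CRUSH rule also need to change?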

My expectation: 8x read speed and 4x write speed compared to a single drive. [I'm putting aside some overhead.]

I'm benchmarking random writes and reads to a single file on the mounted CephFS (fio, iodepth=32, blocksize=4k). I'm getting nice read performance (1000 IOPS = 125x8, as expected), but only 30 IOPS on writes, which is less than half of a single drive's performance.
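For reference, my fio invocation looks roughly like this (the mount path, size and runtime are just examples):

```shell
# Random 4k writes to one file on the CephFS mount
# (path and size below are placeholders, not my exact values)
fio --name=cephfs-test --filename=/mnt/cephfs/testfile \
    --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
    --direct=1 --size=1G --runtime=60 --time_based
```

The read test is identical except for --rw=randread.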

I want to understand what I'm doing wrong.

My settings (the same for every OSD, apart from the device name):

[osd.1]
        host = testserver
        devs = /dev/sdb
        osd mkfs type = xfs

I tried changing the CRUSH map: "step choose firstn 2 type osd" (in the 'data' rule, instead of the default), but it had no effect.
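Concretely, the modified 'data' rule in my decompiled CRUSH map looks roughly like this (only the "step choose" line differs from what the default map gave me; the rest is reproduced from memory, so treat it as a sketch):

```
rule data {
        ruleset 0
        type replicated
        min_size 1
        max_size 10
        step take default
        step choose firstn 2 type osd
        step emit
}
```

Since all OSDs are on one host, I assumed choosing by "type osd" rather than "type host" is required so that both replicas can land on the same server. Is that correct?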

I think I'm making some huge mistake here... I need a way to say 'no more than two copies of the data' and 'stripe unit = 4k when striping'.
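For the striping part, if I understand correctly it is controlled per file (or per directory) via the ceph.file.layout / ceph.dir.layout virtual xattrs on CephFS. Is something like the following the right approach? (The path is just an example, and I'm not sure 4096 is even an allowed stripe unit; older releases may need the cephfs(8) set_layout tool instead of setfattr.)

```shell
# Layout can only be set on an empty file, so create it first
touch /mnt/cephfs/testfile

# 4096 may be below the allowed minimum stripe unit;
# setfattr will fail with EINVAL if so
setfattr -n ceph.file.layout.stripe_unit -v 4096 /mnt/cephfs/testfile
setfattr -n ceph.file.layout.stripe_count -v 8 /mnt/cephfs/testfile

# Read the resulting layout back
getfattr -n ceph.file.layout /mnt/cephfs/testfile
```

If 4k stripes are a bad idea for small random writes, I'd also like to understand why.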

Please help.

Thanks.

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
