On 18.04.2013 10:49, Wolfgang Hennerbichler wrote:
Ceph doesn't support data striping, and you probably also don't need it.
Ceph distributes reads of data anyway, because large objects are spread
automatically across the OSDs and reads happen concurrently; this is
somewhat like striping, but better :)
Well... Maybe I'm saying something wrong, but on a small cluster (one
node, actually, with 8 drives for OSDs), when I mount CephFS and check
FS performance, I see excellent read performance but poor random write
performance. I run the test I/O with 4k blocks, so I thought the problem
was the default stripe block size, but I couldn't find any documentation
on how to change it.
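
The closest I could guess is the virtual extended attributes on the
mounted filesystem; the attribute names and the /mnt/cephfs paths below
are only my assumption of how it would look (and may depend on the
client version), so treat this as a sketch rather than a confirmed
recipe:

    # show the current layout of an existing file on the cephfs mount
    getfattr -n ceph.file.layout /mnt/cephfs/somefile

    # make new files in a directory inherit a 4 MB stripe unit over 8 objects
    # (values are in bytes; already-written files keep their old layout)
    setfattr -n ceph.dir.layout.stripe_unit -v 4194304 /mnt/cephfs/testdir
    setfattr -n ceph.dir.layout.stripe_count -v 8 /mnt/cephfs/testdir

If that's the wrong way to do it, a pointer to the right documentation
would be very welcome.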
Just for reference: on a 1G network with 8 OSDs (8 HDDs) I get over 1k
IOPS on reads and only about 30 IOPS on writes. And atop shows that the
OSDs' disks are underutilized...
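
For completeness, the test I'm describing is roughly the following (the
mount point, file name and sizes are placeholders, not my exact command
line):

    # 4k random writes against a file on the cephfs mount;
    # direct=1 keeps the client page cache out of the picture
    fio --name=randwrite --filename=/mnt/cephfs/testfile \
        --rw=randwrite --bs=4k --size=1G \
        --ioengine=libaio --iodepth=16 --direct=1 \
        --runtime=60 --time_based

    # same pattern with --rw=randread for the read side
    fio --name=randread --filename=/mnt/cephfs/testfile \
        --rw=randread --bs=4k --size=1G \
        --ioengine=libaio --iodepth=16 --direct=1 \
        --runtime=60 --time_based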