Optimizing non-SSD journals

Our cluster is primarily used for RGW, but we'd like to use it for RBD
eventually...

We don't have SSDs for our journals (and won't for a while yet), and we're
still updating our cluster to 10GbE.

I do see some pretty high commit and apply latencies in 'osd perf',
often 100-500 ms, which I attribute to the spinning journals.
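
For reference, this is roughly how I've been sampling those numbers;
nothing fancy, and osd.35 below is just an example id:

  # cluster-wide view, sorted by fs_apply_latency (3rd column)
  watch -n 5 "ceph osd perf | sort -nk3 | tail -20"

  # per-OSD counters on a suspect OSD (run on that OSD's host)
  ceph daemon osd.35 perf dump | python -m json.tool | grep -i -A3 latency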

The cluster consists of ~110 OSDs, 4 per node, one per 2TB drive (JBOD),
formatted xfs, with the associated 5GB journal on a second partition of
the same drive:

/dev/sdb :
 /dev/sdb1 ceph data, active, cluster ceph, osd.35, journal /dev/sdb2
 /dev/sdb2 ceph journal, for /dev/sdb1
/dev/sdc :
 /dev/sdc1 ceph data, active, cluster ceph, osd.36, journal /dev/sdc2
 /dev/sdc2 ceph journal, for /dev/sdc1
...

The xfs OSDs are mounted with:
osd mount options xfs = rw,noatime,inode64

Plus 8 experimental btrfs OSDs, mounted with:
osd_mount_options_btrfs = rw,noatime,space_cache,user_subvol_rm_allowed
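
As a side note, my understanding is that the journal runs in writeahead
mode on the xfs OSDs and parallel mode on the btrfs ones by default. If it
ever helps to compare the two sets like-for-like, I believe the mode can be
pinned with something like the below, though I'd double-check the option
names against the docs first:

  [osd]
  # force writeahead journaling even on btrfs
  # (defaults: parallel on btrfs, writeahead on xfs)
  filestore journal parallel = false
  filestore journal writeahead = true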


Considering that SSDs are unlikely in the near term, what can we do to
help commit/apply latency?

- Would increasing the size of the journal partition help? (A rough
sketch of what I mean is below, after these questions.)

- JBOD vs. single-disk RAID0: the drives are just JBODded now. Research
indicates I may see improvements with single-disk RAID0 (presumably from
the controller's write-back cache). Is this information still current?
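
For the journal-size question above, this is roughly the ceph.conf change
I have in mind (the values are placeholders, not recommendations), along
with the filestore sync-interval settings that I understand interact with
journal sizing:

  [osd]
  # placeholder values only -- would need testing against our workload
  # osd journal size is in MB; we're at 5120 (5GB) now
  osd journal size = 10240
  # defaults are 5 and 0.01 seconds; my understanding is a bigger journal
  # mainly helps if the max sync interval is raised so writes can coalesce
  filestore max sync interval = 10
  filestore min sync interval = 0.01

My understanding is that growing the partition alone isn't enough; the
journal would have to be flushed and recreated (ceph-osd -i N
--flush-journal, then --mkjournal) for the new size to take effect, so I'd
like to know whether it's worth the trouble before touching ~110 OSDs.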

thanks-

-Ben


