Different filesystems on OSD hosts in the same cluster

Hi!

We do some performance tests on our small Hammer install:
 - Debian Jessie;
 - Ceph Hammer 0.94.2, self-built from sources (tcmalloc);
 - 1x E5-2670 + 128GB RAM;
 - 2 nodes shared with mons; system and mon DB are on a separate SAS mirror;
 - 16 OSDs on each node, SAS 10k;
 - 2x Intel DC S3700 200GB SSDs for journals;
 - 10Gbit interconnect, shared public and cluster network, MTU 9100;
 - 10Gbit client host, fio 2.2.7 compiled with the RBD engine.

We benchmarked 4k random read performance on a 500G RBD volume with the fio RBD
engine and got noticeably different results depending on the OSD filesystem. With XFS
(noatime,attr2,inode64,allocsize=4096k,noquota) on the OSD disks we get ~7k sustained
IOPS; after recreating the same OSDs with ext4 (noatime,data=ordered) we get ~9.5k IOPS
in the same benchmark.
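
To make the test concrete, a minimal fio job file of this shape is what we mean by
the 4k random read benchmark (the pool, image and client names below are placeholders,
not our real ones):

    ; 4k random read against an RBD image via the fio rbd engine
    ; pool/image/client names are placeholders
    [rbd-4k-randread]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=fio-test
    rw=randread
    bs=4k
    iodepth=32
    numjobs=4
    direct=1
    time_based=1
    runtime=300
    group_reporting=1

Run as "fio rbd-4k-randread.fio" from the 10Gbit client host.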

So we have some questions for the community:
 1. Does ext4 really perform better under a typical RBD load (we use Ceph to host VM images)?
 2. Is it safe to mix OSDs with different backing filesystems in one cluster
(we use ceph-deploy to create and manage OSDs)?
 3. Is it safe to move our production cluster (Firefly 0.80.7) from XFS to ext4 by
removing the XFS OSDs one by one and later re-adding the same disk drives as ext4 OSDs
(of course, I know about the huge data movement that will take place during this process)?
A rough per-OSD sketch of what we mean follows below.
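
To make question 3 concrete, the per-OSD sequence we have in mind is roughly the
following; osd.N, node1, sdc and the journal partition are placeholders, and the last
step assumes ceph-deploy's --fs-type option:

    # drain and remove one XFS OSD
    ceph osd out N
    # ...wait until all PGs are active+clean again...
    /etc/init.d/ceph stop osd.N      # or your init system's equivalent
    ceph osd crush remove osd.N
    ceph auth del osd.N
    ceph osd rm N

    # wipe the same drive and bring it back as an ext4 OSD
    ceph-deploy disk zap node1:sdc
    ceph-deploy osd create --fs-type ext4 node1:sdc:/dev/sdb1

As far as I understand, ext4 OSDs also want "filestore xattr use omap = true" in
ceph.conf because of ext4's limited xattr size, so that would have to be in place
before re-creating the OSDs.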

Thanks!

Megov Igor
CIO, Yuterra



