Re: ceph on XFS

Hi Christian,

I just switched my testing from btrfs to xfs, because I was also seeing the extreme slowdown with btrfs after a short while.

The performance and stability I'm getting with xfs at the moment are very good.

I'm testing two setups:
- OSD per disk with XFS, two nodes and 12 disks per node. An intermediate server acts as a gateway, mounting the cephfs and exposing it with an rsync daemon. I'm running 20 rsyncs to different directories on the cephfs (backups of Linux servers with about 100GB of data each).
- OSD per disk with XFS, two nodes and 12 disks per node. An intermediate server acts as a gateway, connecting to 20 rbds, each formatted with an ext3 filesystem. Those filesystems are exposed with an rsync daemon. Same load as with the first setup.
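For the second setup, the per-image chain might look roughly like this. This is only a sketch of how I set it up; the image name, size, and mount path here are placeholders, not the exact ones I use:

```shell
# Create and map one RBD image, format it ext3, and mount it
# so the rsync daemon can export it. Repeat for each of the 20 images.
rbd create backup01 --size 204800      # image size in MB (~200GB, placeholder)
rbd map backup01                       # kernel client exposes it, e.g. /dev/rbd0
mkfs.ext3 /dev/rbd0                    # one ext3 filesystem per image
mkdir -p /exports/backup01
mount /dev/rbd0 /exports/backup01     # rsyncd module then points here
```

The corresponding rsyncd.conf module would just set `path = /exports/backup01`.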

Both nodes have an ssd for journaling (partitioned into 12 partitions, one for each osd).
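In ceph.conf this amounts to pointing each osd's journal at one SSD partition. A sketch, with the device names and osd ids as assumptions (adjust for your layout):

```shell
# ceph.conf fragment (not my literal config):
# [osd.0]
#     osd journal = /dev/sda1    ; partition 1 of the journal SSD
# [osd.1]
#     osd journal = /dev/sda2    ; partition 2, and so on up to /dev/sda12
```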

Both setups perform really well. It's a gigabit network; with the first setup I get about 50MB/s on average, and the second setup goes faster, but I don't have any concrete figures at the moment.

In the past I had a lot of trouble with btrfs; it crashed a lot. But with the 3.2.1 kernel those crashes have disappeared.

Stefan

On 01/27/2012 09:48 PM, Christian Brunner wrote:
Hi,

reading the list archives, I get the impression that XFS is the second-best
alternative to btrfs. But when I start a ceph-osd on an XFS
volume, there is still a big warning:

WARNING: not btrfs or ext3.  We don't currently support file systems other
              than btrfs and ext3 (data=journal or data=ordered).  Data may be
              lost in the event of a crash.

I know that I can't use btrfs snapshots, but is it really that bad?

I'm running a recent RHEL 6.2 kernel now that has all those wonderful
optimizations Dave Chinner was talking about.

Thanks,
Christian
--
To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html


