On Wed, Apr 24, 2013 at 07:49:40AM -0500, Mark Nelson wrote:
> On 04/24/2013 05:18 AM, Maik Kulbe wrote:
> Any idea if this was more due to OCFS2 or more due to Ceph? I
> confess I don't know much about how OCFS2 works. Is it doing some
> kind of latency-sensitive operation when two files are being written
> per directory?

I'm using OCFS2 on native infrastructure (no VMs, no Ceph). It shows
the same behaviour. I've never looked into it because it's not an issue
for our type of use.

> > At the moment I'm trying a solution that uses RBD with a normal FS like
> > EXT4 or ZFS, where two servers export that block device via NFS (with
> > heartbeat for redundancy and failover), but that involves problems with
> > file system consistency.
> >
> > My question here is, what kind of software stack would other users here
> > suggest for this kind of workload?

Maybe GlusterFS helps, but in practice it never performs very well
(except over NFS). GlusterFS turns out to be slow, too.

--
http://www.wogri.com
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
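For context, the active/passive RBD-over-NFS setup described in the quoted text might be sketched roughly as below. This is only an illustration, not a tested configuration: the pool name, image name, mount point, and export network are made up, and the crucial constraint is that ext4 is not cluster-aware, so only the currently active head may have the image mapped and mounted at any given moment (this is exactly where the mentioned consistency problems come from if failover misfires):

```shell
# On the ACTIVE NFS head only. ext4 is a single-node filesystem, so the
# standby server must never map/mount the same image concurrently.
rbd map mypool/nfsimage        # exposes the image as e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0            # first time only!
mount /dev/rbd0 /export/data

# Hypothetical /etc/exports entry for the clients:
#   /export/data  10.0.0.0/24(rw,sync,no_subtree_check)
exportfs -ra

# On failover, the cluster manager (heartbeat/pacemaker) must tear down
# the old node completely BEFORE the standby takes over:
umount /export/data
rbd unmap /dev/rbd0
```

The fencing step at the end is the fragile part: if the old active node is not reliably stopped (or STONITH'd) before the standby mounts the image, two nodes write to one ext4 filesystem and corrupt it.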