Newbie Ceph Design Questions

Hello,

On Fri, 19 Sep 2014 18:29:02 -0700 Craig Lewis wrote:

> I'm personally interested in running Ceph on some RAID-Z2 volumes with
> ZILs.  XFS feels really dated after using ZFS.  I need to check the
> progress, but I'm thinking of reformatting one node once Giant comes out.
> 
I'm looking forward to the results of this.

Personally I found ext4 to be faster than XFS in nearly all use cases, and
the lack of full, native kernel integration for ZFS doesn't appeal to me
either.
Especially since Ceph usually lusts for the latest kernel, which of course
isn't supported yet by ZoL. ^o^
 
> 
> On Thu, Sep 18, 2014 at 6:36 AM, Christian Balzer <chibi at gol.com> wrote:
> 
> >
> > Hello,
> >
> > On Thu, 18 Sep 2014 13:07:35 +0200 Christoph Adomeit wrote:
> >
> >
> > > Presently we use Solaris ZFS boxes as NFS storage for VMs.
> > >
> > That sounds slower than I would expect Ceph RBD to be in nearly all
> > cases.
> >
> > Also, how do you replicate the filesystems to cover for node failures?
> >
> 
> I have used ZFS snapshots and zfs send/receive from cron.  It's not live
> replication, but it's fast enough that I could run it every 5 minutes,
> maybe even every minute.
> 
Yeah, a coworker does that on a FreeBSD pair here. Since the data in
question are infrequently, manually edited configuration files, a low
replication rate is not a big issue; the changes could easily be re-done
on the other node if need be.

However, it won't cut the mustard where real HA is required.
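
For anyone who wants to try the cron approach anyway, here is a minimal
sketch of such a snapshot plus incremental send/receive cycle, wrapped in
Python for illustration. The pool/dataset and target host names are made
up, passwordless SSH to the receiver and an existing initial snapshot on
both sides are assumed, and error handling is kept to a bare minimum:

#!/usr/bin/env python3
# Minimal sketch of periodic ZFS replication via snapshot + zfs send/receive.
# Dataset and host names below are placeholders.
import subprocess
import time

DATASET = "tank/vmstore"          # hypothetical source dataset
TARGET_HOST = "standby-node"      # hypothetical receiving node
TARGET_DATASET = "tank/vmstore"   # dataset name on the receiver
INTERVAL = 300                    # 5 minutes, as mentioned above


def replicate(prev_snap, new_snap):
    # Take a new snapshot of the source dataset.
    subprocess.run(["zfs", "snapshot", "%s@%s" % (DATASET, new_snap)],
                   check=True)
    # Send only the delta since the previous snapshot and pipe it into
    # 'zfs receive' on the remote node over SSH.
    send = subprocess.Popen(
        ["zfs", "send", "-i",
         "%s@%s" % (DATASET, prev_snap),
         "%s@%s" % (DATASET, new_snap)],
        stdout=subprocess.PIPE)
    subprocess.run(["ssh", TARGET_HOST, "zfs", "receive", "-F",
                    TARGET_DATASET],
                   stdin=send.stdout, check=True)
    send.stdout.close()
    send.wait()


if __name__ == "__main__":
    prev, n = "repl-0", 1
    while True:
        new = "repl-%d" % n
        replicate(prev, new)
        prev, n = new, n + 1
        time.sleep(INTERVAL)

In practice you would also prune old snapshots and drive this from cron
rather than a loop, but the incremental send into 'zfs receive -F' is the
core of it.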

> 
> > > Next question: I read that in Ceph an OSD is marked invalid as
> > > soon as its journaling disk is invalid. So what should I do? I don't
> > > want to use one journal disk for each OSD. I also don't want to use
> > > a journal disk per 4 OSDs, because then I will lose 4 OSDs if an SSD
> > > fails. Using journals on the OSD disks, I am afraid, will be slow.
> > > Again I am afraid of slow Ceph performance compared to ZFS, because
> > > ZFS supports ZIL write cache disks.
> > >
> > I don't do ZFS, but it is my understanding that losing the ZIL cache
> > (presumably on an SSD for speed reasons) will also potentially lose you
> > the latest writes. So not really all that different from Ceph.
> >
> 
> ZFS will lose only the data that was in the ZIL but not yet on disk.  It
> requires admin intervention to tell ZFS to forget about the lost data.
> ZFS will allow you to read/write any data that was already on the disks.
> 
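
Good to know. If I read the ZFS docs right, that intervention boils down
to importing the pool while accepting the missing log device
('zpool import -m'). A rough sketch, with a made-up pool name and wrapped
in Python for illustration:

#!/usr/bin/env python3
# Sketch of recovering a ZFS pool after its separate log (ZIL) device died.
# The pool name is a placeholder.
import subprocess

POOL = "tank"  # hypothetical pool name

# With the log device gone, a plain 'zpool import' refuses to bring the
# pool back; '-m' accepts the missing log device, discarding whatever
# synchronous writes had only reached the ZIL.
subprocess.run(["zpool", "import", "-m", POOL], check=True)

# Confirm the pool is online again; the dead log device will still show
# up in the status output until it is replaced or removed.
subprocess.run(["zpool", "status", POOL], check=True)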

