Re: blueprint: osd: ceph on zfs

On Sun, 4 Aug 2013, Noah Watkins wrote:
> I was thinking along the lines of whether it made sense to multi-purpose
> the BackingFileSystem abstraction for non-Linux portability.  In that case
> even things like posix_fallocate, xattr access, etc. might fit in well,
> since they may have equivalent functionality under a different name.

Yeah.

In the fiemap case, incidentally, we are probably better off using 
SEEK_DATA/SEEK_HOLE for the extent-map stuff, since FIEMAP is/was buggy in 
many common kernels and only gives you meaningful data after an fsync().  
Currently it's disabled.
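
As a rough illustration (this isn't the actual FileStore code, and 
map_extents() is just a name made up for this sketch), walking a file's 
allocated extents with SEEK_DATA/SEEK_HOLE looks something like this:

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

/* Print each allocated [start, end) extent of fd.  Requires
 * SEEK_DATA/SEEK_HOLE support in the kernel and filesystem. */
static int map_extents(int fd)
{
	struct stat st;
	off_t data = 0, hole;

	if (fstat(fd, &st) < 0)
		return -1;

	while (data < st.st_size) {
		/* Find the start of the next region containing data. */
		data = lseek(fd, data, SEEK_DATA);
		if (data < 0)
			break;  /* ENXIO: only holes past this offset */
		/* Find where that data region ends. */
		hole = lseek(fd, data, SEEK_HOLE);
		if (hole < 0)
			return -1;
		printf("extent: %lld..%lld\n",
		       (long long)data, (long long)hole);
		data = hole;
	}
	return 0;
}

Unlike FIEMAP, this doesn't require an fsync() beforehand to report 
meaningful offsets on the affected kernels.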

sage


> 
> On Sun, Aug 4, 2013 at 5:47 PM, Yan, Zheng <ukernel@xxxxxxxxx> wrote:
> > On Mon, Aug 5, 2013 at 7:39 AM, Noah Watkins <noah.watkins@xxxxxxxxxxx> wrote:
> >> Doesn't it make sense for fiemap to be part of the `class
> >> BackingFileSystem` abstraction?
> >>
> >
> > FS_IOC_FIEMAP is a standard API; I think there is no need to implement
> > it in `class BackingFileSystem`.
> >
> >
> >> On Thu, Jul 25, 2013 at 4:53 PM, Sage Weil <sage@xxxxxxxxxxx> wrote:
> >>> http://wiki.ceph.com/01Planning/02Blueprints/Emperor/osd:_ceph_on_zfs
> >>>
> >>> We've done some preliminary testing and xattr debugging that allows
> >>> ceph-osd to run on zfsonlinux using the normal write-ahead journaling
> >>> mode (the same mode used for xfs and ext4).  However, we aren't doing
> >>> anything special to take advantage of zfs's capabilities.
> >>>
> >>> This session would go over what is needed to make parallel journaling
> >>> work (which would leverage zfs snapshots).  I would also like to discuss
> >>> whether other longer-term possibilities, such as storing objects
> >>> directly using the DMU, make sense given what ceph-osd's ObjectStore
> >>> interface really needs.  It might also be an appropriate time to
> >>> consider whether other snapshotting Linux filesystems (like nilfs2)
> >>> would fit well into any generalization of the filestore code that comes
> >>> out of this effort.
> >>>
> >>> If anybody is interested in this, please add yourself to the interested
> >>> parties section (or claim ownership) of this blueprint!