Re: Questions/comments on using ZFS for OSDs

On 11/12/2013 04:43 PM, Eric Eastman wrote:
I built Ceph version 0.72 with --with-libzfs on Ubuntu 13.04 after
installing ZFS from the ppa:zfs-native/stable repository. The ZFS
version is v0.6.2-1.

I have a few questions and comments on Ceph using ZFS-backed OSDs.

As ceph-deploy does not support ZFS, I used the instructions at:
http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
and hand-created a new OSD on an existing Ceph system. I guessed that I
needed to build a zpool out of a disk and then create a ZFS file system
mounted at /var/lib/ceph/osd/ceph-X, where X is the number returned by
the ceph osd create command. As I am testing on a VM, I created two new
disks: one 2GB (/dev/sde) for the journal and one 32GB (/dev/sdd) for
data. To set up the system for ZFS-based OSDs, I first added the
following to all my ceph.conf files:

    filestore zfs_snap = 1
    journal_aio = 0
    journal_dio = 0

I then created the OSD with the commands:

# ceph osd create
4
# parted -s /dev/sdd -- mklabel gpt mkpart primary 1 -1
# parted -s /dev/sde -- mklabel gpt mkpart primary 1 -1
# zpool create sdd /dev/sdd
# mkdir /var/lib/ceph/osd/ceph-4
# zfs create -o mountpoint=/var/lib/ceph/osd/ceph-4 sdd/ceph-4
# ceph-osd  -i 4 --mkfs --mkkey --osd-journal=/dev/sde1 --mkjournal
# ceph auth add osd.4 osd 'allow *' mon 'allow rwx' \
      -i /var/lib/ceph/osd/ceph-4/keyring

I then decompiled the CRUSH map, added osd.4, recompiled the map, and
set Ceph to use the new map.
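For reference, the CRUSH map round-trip I did can be sketched with
crushtool roughly as follows (the file names are just placeholders):

```shell
# Dump the current CRUSH map and decompile it to editable text
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# (edit crushmap.txt to add an entry for osd.4)

# Recompile the edited map and inject it into the cluster
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new
```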

When I started the osd.4 with:

# start ceph-osd id=4

It failed to start, as the ceph-osd log file indicated the journal was
missing:

      mount failed to open journal /var/lib/ceph/osd/ceph-4/journal: (2) No such file or directory

So I manually created a link named journal pointing to /dev/sde1 and
created the journal_uuid file.  Should ceph-osd have done this step?  Is
there anything else I may have missed?
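For anyone hitting the same error, the manual workaround was roughly the
following; the PARTUUID step is my guess at what would normally end up
in journal_uuid, so treat it as a sketch rather than the official
procedure:

```shell
cd /var/lib/ceph/osd/ceph-4

# Point the OSD at the journal partition
ln -s /dev/sde1 journal

# Record the journal partition's UUID (assumed content of journal_uuid)
blkid -o value -s PARTUUID /dev/sde1 > journal_uuid
```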

With limited testing, the ZFS backed OSD seems to function correctly.

I was wondering if there are any ZFS file system options that should be
set for better performance or data safety.

You may want to try using SA xattrs. This resulted in a measurable performance improvement when I was testing Ceph on ZFS last spring.
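If you want to try it, SA xattrs are enabled per dataset; using the pool
and dataset names from the steps above (adjust to your own layout):

```shell
# Store xattrs in the dnode's system-attribute area instead of
# hidden xattr directories, which cuts down on extra I/O
zfs set xattr=sa sdd/ceph-4

# Confirm the property took effect
zfs get xattr sdd/ceph-4
```

Setting the property on the pool root instead would let newly created
datasets inherit it.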


It would be nice if ceph-deploy would handle ZFS.

Lastly, I want to thank Yan, Zheng and all the rest who worked on this
project.

Eric

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
