Ceph-btrfs layout

Hello.

I'm new to Ceph, so suggestions, guidance, and advice about what
works and what fails are most appreciated.


Hardware:
(3) FX-8350, 32 GB RAM, (2) 2 TB drives each, running Gentoo.
Each system has a btrfs RAID-1 set up like so:

Disklabel type: gpt
Device           Start          End   Size Type
/dev/sda1         2048         8191     3M BIOS boot partition
/dev/sda2         8192      1024000   496M Linux filesystem
/dev/sda3      1026048   3907029134   1.8T Linux filesystem

Found valid GPT with protective MBR; using GPT.

# blkid
/dev/loop0: TYPE="squashfs" 
/dev/sda1: UUID="85cd9d86-4f4d-4113-b14e-cf5339373e20" TYPE="ext4"
PARTLABEL="grub2biosboot" PARTUUID="f88a8259-a4e4-4db8-86df-e709d135fe47" 
/dev/sda2: LABEL="BOOT" UUID="d67a8d19-64bc-4ee1-bebf-48c935b039fa"
UUID_SUB="3eb62dd8-3f07-440f-8606-0c6d99362f6e" TYPE="btrfs"
PARTLABEL="boot" PARTUUID="8a6f7b5f-28a8-4f87-938f-386a93ebe07f" 
/dev/sda3: LABEL="BTROOT" UUID="b7753366-a9a9-4074-8e0e-3beea50fee56"
UUID_SUB="e546ce31-098f-4897-bffd-6c5628f6b62e" TYPE="btrfs"
PARTLABEL="root" PARTUUID="6a8fa54b-3d58-4ac5-8784-6d540f2e65fc" 
/dev/sdb2: LABEL="BOOT" UUID="d67a8d19-64bc-4ee1-bebf-48c935b039fa"
UUID_SUB="02034edf-c537-4fc6-9375-1599e8af2737" TYPE="btrfs"
PARTLABEL="boot" PARTUUID="b7b88ea7-b59a-4a4d-b857-4f55a1be3830" 
/dev/sdb3: LABEL="BTROOT" UUID="b7753366-a9a9-4074-8e0e-3beea50fee56"
UUID_SUB="8a76be85-6106-47ea-90ae-756fb8c37bf1" TYPE="btrfs"
PARTLABEL="root" PARTUUID="3c2c6f88-a1da-40de-83be-21af71a5ce26" 
/dev/sr0: UUID="2014-08-28-06-08-20-22" LABEL="Gentoo Linux amd64 20140828"
TYPE="iso9660" PTUUID="1047d058" PTTYPE="dos" 
/dev/sdb1: PARTLABEL="grub2biosboot"
PARTUUID="3c7a0935-57d4-4bff-a492-aaa261e62212" 
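For reference, a two-device btrfs RAID-1 like the one shown above can
be created roughly as follows (a sketch only; device names and labels
match this box's blkid output, and the data/metadata profiles are
assumptions):

# create a mirrored btrfs across both root partitions
# -d raid1: mirror data; -m raid1: mirror metadata
mkfs.btrfs -L BTROOT -d raid1 -m raid1 /dev/sda3 /dev/sdb3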



Goals:

To set up a simple three-node cluster with CephFS for experimentation
with cluster codes; Mesos and Spark being first in line. Right now I'm
working to get v0.87 (Giant) into Gentoo as an overlay. Comments
on which version of Ceph to use, including the live (9999) version,
are welcome.

From what I've read, it seems that COW operations are still troublesome
with Ceph. Using btrfs RAID-1 on each node will let me turn off COW
if/when those sorts of issues arise.
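Disabling COW per-directory can be sketched like this (the OSD data
path here is an assumption, not my actual layout; note that +C only
affects files created after the attribute is set, so it should be
applied to an empty directory):

# mark the OSD data directory NOCOW; new files inherit the attribute
chattr +C /var/lib/ceph/osd/ceph-0
# verify: the 'C' flag should appear in the listing
lsattr -d /var/lib/ceph/osd/ceph-0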

What I need help with right now is setting up the UUID-based /etc/fstab
and suggestions on exactly how to configure Ceph(FS).
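For the fstab, here is a sketch based on the blkid output above; the
filesystem UUIDs are the ones shown (both mirror members of a btrfs
filesystem share one UUID), but the mount options are assumptions to
adjust to taste:

# /etc/fstab (sketch)
UUID=d67a8d19-64bc-4ee1-bebf-48c935b039fa  /boot  btrfs  noauto,noatime  0 2
UUID=b7753366-a9a9-4074-8e0e-3beea50fee56  /      btrfs  noatime         0 1

One caveat with multi-device btrfs: all member devices must have been
scanned (udev normally runs btrfs device scan) before a UUID mount
succeeds, or the mount can be pinned with explicit
device=/dev/sda3,device=/dev/sdb3 options.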

My desire is to keep the btrfs Gentoo installs stable, but to be able
to use Ansible or other (Ceph-based) tools to reconfigure Ceph or
recover from Ceph failures.



James

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


