OSD Journal creation?

All - I am building my first Ceph cluster, and doing it "the hard way",
manually, without the aid of ceph-deploy.  I have successfully built the
mon cluster and am now adding OSDs.

My main question:
How do I prepare the "Journal" prior to the prepare/activate stages of
OSD creation?


More details:
Basically, all of the documentation seems to assume the journal is
already "prepared".  Do I simply create a single raw partition on a
physical device, and the "ceph-disk prepare ..." and "ceph-disk
activate ..." steps will take care of everything for the journal,
presumably based on the "ceph-disk prepare ... --fs-type" setting?  Or
do I need to actually format it with a filesystem before handing it over
to the Ceph OSD?
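In other words, is something like the sequence below all that's needed,
with the journal partition handed over raw?  (Just a sketch of what I
have in mind - /dev/sdc as one of the 4TB OSD data disks and /dev/sdb1
as an unformatted partition on Journal Disk A are placeholder names.)

  # give ceph-disk the whole data disk plus a raw journal partition
  ceph-disk prepare --cluster ceph --fs-type xfs /dev/sdc /dev/sdb1
  # activate the data partition that "prepare" created
  ceph-disk activate /dev/sdc1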

The architecture I'm thinking of is as follows, based on the hardware I
have for the OSD nodes (currently 9 servers, each with):

  RAID 1 mirror for the OS hard drives (2 disks)
  1 data disk holding journals for 5 of the OSD disks (4TB)
  1 data disk holding journals for the other 5 OSD disks (4TB)
  10 data disks as OSDs (one OSD per disk, 4TB each)

Essentially, there are 12 data disks in the node (all 4TB 7200 rpm
spinning disks).  Splitting the journals across two of them gives me a
failure domain of "5 data disks + 1 journal disk" within a single
physical server for CRUSH map purposes.  It also helps spread the I/O
workload of the journaling activity across 2 physical disks in a chassis
instead of one (since a spinning journal disk is "pretty darn slow").

In this configuration I'd create 5 separate partitions on Journal Disk A
and 5 on Journal Disk B ... but do they need to be formatted and mounted?
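For the partitioning itself I was picturing something along these lines
(Journal Disk A assumed to be /dev/sdb, and the 10G size is purely an
illustrative number, not a recommendation):

  # carve Journal Disk A into 5 GPT partitions, one per OSD disk it
  # serves, and leave them unformatted (no filesystem, no mount)
  for i in 1 2 3 4 5; do
      sgdisk --new=${i}:0:+10G --change-name=${i}:"ceph journal" /dev/sdb
  done

... and then the same again on Journal Disk B.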

Yes, we know that as we move to more "real production" workloads we'll
want/need to change this for performance reasons - e.g. putting the
journals on SSDs.

Any pointers on where I missed this info in the documentation would be
helpful too ... I've been all over the ceph.com/docs/ site and haven't
found it yet... 

Thanks,
~~shane 



