Re: ceph for small cluster?

On Mon, Dec 31, 2012 at 9:14 AM, Miles Fidelman
<mfidelman@xxxxxxxxxxxxxxxx> wrote:
>
>
> Which raises another question: how are you combining drives within each OSD (raid, lvm, ?).
>

I'm not combining them, just running one OSD per data disk. On this
cluster that's 2 disks in each of the 3 nodes. I only ended up that
way because I added the second disk to each node after getting
started. There was an Inktank blog post not too long ago about the
performance of RAID'ed disks under OSDs that might give you some
quantitative justification for whichever route you go.
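
For illustration, a one-OSD-per-disk layout in ceph.conf looks
roughly like this (hostnames and data paths here are just examples,
assuming each data disk is already mounted at its osd data path):

    [osd.0]
        host = node1
        osd data = /var/lib/ceph/osd/ceph-0    # first data disk on node1
    [osd.1]
        host = node1
        osd data = /var/lib/ceph/osd/ceph-1    # second data disk on node1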

Like Wido suggests, I also use a shared SSD for the journals on each
node. The journal isn't really about speeding recovery from failed
OSDs/disks; it's about being able to ACK writes faster while still
retaining integrity when Bad Things happen. If you're RAIDing with a
battery-backed cache, I think you can run without a journal, but I
don't know the details on that.
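
As a sketch of the shared-journal piece (device names are made up;
the idea is one SSD partition per OSD journal):

    [osd.0]
        osd journal = /dev/sdc1    # journal partition 1 on the shared SSD
    [osd.1]
        osd journal = /dev/sdc2    # journal partition 2 on the shared SSD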

Matthew