Hi Graham,
On 06/29/2011 12:23 PM, Hemingway, Graham Stuart wrote:
> Hello,
>
> I have a simple Ceph installation working. My cluster has 7 hardware
> nodes: 5 OSDs (one 2TB drive each) and 2 MDSs. I have one monitor,
> which is on the same machine as one of the MDSs. Running Ubuntu 10.10
> and v0.30, I have things working. Yeah.
>
> So, my questions are:
>
> 1) Would you recommend adding a second monitor on the same machine as
> the other MDS? That would give two nodes, each with a MON and an MDS.
> Does Ceph still want monitors in odd numbers?
Generally the monitors don't need much CPU or memory, and it's a good
idea to run 3, so your cluster keeps working if one fails. An even
number doesn't improve reliability, since a majority of monitors needs
to be active to make progress: with 4 monitors a majority is 3, so you
can still only lose one, exactly as with 3 monitors.
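
For reference, a minimal sketch of what a three-monitor ceph.conf could
look like -- the hostnames, addresses, and paths below are made-up
examples, not your actual nodes:

    ; hypothetical ceph.conf fragment; adjust hosts, IPs, and paths
    [mon]
            mon data = /data/mon$id

    [mon.0]
            host = mds0            ; existing monitor, first MDS machine
            mon addr = 10.0.0.10:6789

    [mon.1]
            host = mds1            ; second MDS machine
            mon addr = 10.0.0.11:6789

    [mon.2]
            host = osd0            ; any third node will do
            mon addr = 10.0.0.12:6789

Keep in mind that joining new monitors to a live cluster takes more
than editing the conf file (the new monitors need their data
directories initialized and have to be added to the monitor map), so
check the wiki before doing this on a cluster holding data you care
about.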
> 2) What is the procedure for adding an additional hard drive to each
> OSD node? I have a second 2TB drive physically in each OSD node that
> I would like to bring up, and I did not find any documentation on
> simply adding drives.
The simplest way to add a drive is to run a new OSD on it. Instructions
for this are on the wiki:
http://ceph.newdream.net/wiki/OSD_cluster_expansion/contraction
Since your new drives are the same size as the existing ones, you
wouldn't need to adjust your CRUSH map weights to get an even
distribution of data.
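
From memory, the steps on that page look roughly like the sketch below.
The commands are version-dependent and the names (osd.5, node1,
/data/osd5) are only examples, so treat the wiki page as authoritative:

    ; in ceph.conf, one new section per node, e.g. on the first node:
    [osd.5]
            host = node1
            osd data = /data/osd5

    # grow the osd id space from 5 to 10 (one new osd per node)
    $ ceph osd setmaxosd 10

    # initialize the new osd's data directory (run on node1)
    $ cosd -i 5 --mkfs

    # start the new daemon
    $ /etc/init.d/ceph start osd.5

    # if the new osds don't show up in the crush map, add them by hand:
    $ ceph osd getcrushmap -o /tmp/crush
    $ crushtool -d /tmp/crush -o /tmp/crush.txt   # edit: add osd.5-9
    $ crushtool -c /tmp/crush.txt -o /tmp/crush.new
    $ ceph osd setcrushmap -i /tmp/crush.new

Repeat the conf/mkfs/start steps for each node's new drive.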
The downside of running one OSD per disk is increased memory and CPU
usage, but this shouldn't be a problem with a small cluster and few
placement groups (PGs).
Josh