Re: CEPH Expansion


Hi Craig!


For the moment I have only one node with 10 OSDs.
I want to add a second one with 10 more OSDs.

Each OSD in every node is a 4TB SATA drive. No SSD disks!

The data are approximately 40GB, and I will do my best to have zero
or at least very low load during the expansion process.

To be honest, I haven't touched the CRUSH map. I wasn't aware that I
should have changed it, so it is still the default one. Is that OK?
Where can I read about host-level replication in the CRUSH map, and
how can I check whether it is already enabled?
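
(From what I've found so far, I guess I could dump the compiled CRUSH
map and inspect the rule with something like the commands below, then
check whether the "step chooseleaf" line says "type host" or
"type osd"? Please correct me if that is not the right way:

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    grep chooseleaf crush.txt
)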

Any other things that I should be aware of?

All the best,


George


It depends.  There are a lot of variables: how many nodes and disks
you currently have, whether you are using journals on SSD, how much
data is already in the cluster, and what the client load on the
cluster is.

Since you only have 40 GB in the cluster, it shouldn't take long to
backfill.  You may find that it finishes backfilling faster than you
can format the new disks.
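
You can watch the backfill as it happens with something like the
following (the exact output varies a bit by release):

    ceph -w    # stream cluster events while PGs backfill
    ceph -s    # one-shot health and PG summary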

Since you only have a single OSD node, you must have changed the
crushmap to allow replication over OSDs instead of hosts.  Once the
new node is in would be the best time to switch back to host-level
replication.  The more data you have, the more painful that change
will become.
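
Roughly, the switch looks like this (a sketch, untested here; back up
the map first, and expect data movement when you inject the new one):

    ceph osd getcrushmap -o crush.bin
    crushtool -d crush.bin -o crush.txt
    # in crush.txt, inside your replicated rule, change
    #   step chooseleaf firstn 0 type osd
    # to
    #   step chooseleaf firstn 0 type host
    crushtool -c crush.txt -o crush.new
    ceph osd setcrushmap -i crush.new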

On Sun, Jan 18, 2015 at 10:09 AM, Georgios Dimitrakakis wrote:

Hi Jiri,

thanks for the feedback.

My main concern is whether it's better to add each OSD one-by-one
and wait for the cluster to rebalance every time, or to do it all
together at once.
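
(I have also seen people suggest throttling backfill while adding
OSDs with something like the line below; is that the right knob, or
is it unnecessary for so little data?

    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
)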

Furthermore, an estimate of the time needed to rebalance would be
great!

Regards,



--
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




