Generally speaking, you are correct. Adding more OSDs at once is more
efficient than adding fewer at a time.
That being said, do so carefully. We typically add OSDs to our clusters
either 32 or 64 at once, and we have had issues on occasion with bad
drives. It's common for us to have a drive or two go bad within 24
hours or so of adding them to Ceph. If drives fail in more failure
domains than you have replicas before recovery finishes, you can lose
data outright: every copy of some placement groups may be gone at once.
The approach that is both efficient and safe is to add as many drives
as possible within a single failure domain, wait for recovery, and
repeat.
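
For what it's worth, that per-failure-domain procedure can be scripted.
The sketch below is only an illustration, assuming ceph-deploy-era
tooling and made-up host/device names (server1, sdb through sde);
adapt it to however you actually provision OSDs:

    # Pause data movement so all four OSDs join the CRUSH map first;
    # Ceph then computes one final layout instead of reshuffling
    # after each individual OSD addition.
    ceph osd set norebalance

    # Create the four OSDs on one host (one failure domain).
    # Hypothetical device names -- substitute your own.
    for dev in sdb sdc sdd sde; do
        ceph-deploy osd create server1:$dev
    done

    # Release the flag, then wait for recovery to finish before
    # moving on to the next failure domain.
    ceph osd unset norebalance
    until ceph health | grep -q HEALTH_OK; do
        sleep 60
    done

Some operators instead set noin and bring new OSDs up to weight with
gradual "ceph osd crush reweight" steps to throttle backfill further;
either way the goal is one recovery wave per failure domain.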
On Tue, 2017-03-21 at 19:56 +0100, mj wrote:
> Hi,
>
> Just a quick question about adding OSDs, since most of the docs I can
> find talk about adding ONE OSD, and I'd like to add four per server on
> my three-node cluster.
>
> This morning I tried the careful approach, and added one OSD to
> server1. It all went fine, everything rebuilt and I have a HEALTH_OK
> again now. It took around 7 hours.
>
> But now I started thinking... (and that's when things go wrong,
> therefore hoping for feedback here....)
>
> The question: was I being stupid to add only ONE osd to server1? Is
> it not smarter to add all four OSDs at the same time?
>
> I mean: things will rebuild anyway...and I have the feeling that
> rebuilding from 4 -> 8 OSDs is not going to be much heavier than
> rebuilding from 4 -> 5 OSDs. Right?
>
> So better add all new OSDs together on a specific server?
>
> Or not? :-)
>
> MJ
>
Steve Taylor | Senior Software Engineer | StorageCraft Technology Corporation
380 Data Drive Suite 300 | Draper | Utah | 84020
Office: 801.871.2799
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com