importance of steps when adding OSDs

Hi,

Suggestion first: the ceph.com site could have some best-practice rules
for adding new OSDs. Searching on this topic reveals that people
have questions like:
- may I add several OSDs at once?
- may I completely change the crushmap online so that PGs get completely relocated?
- what config parameters help to reduce backfill load? (see sketch below)
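
To make the last question concrete, these are the backfill/recovery
throttling options usually mentioned (osd_max_backfills,
osd_recovery_max_active, osd_recovery_sleep); the values below are only
placeholder examples, not recommendations:

  # Throttle backfill/recovery on all OSDs at runtime (example values only).
  # injectargs works on older releases; newer ones also have "ceph config set osd ...".
  ceph tell osd.* injectargs '--osd-max-backfills 1'
  ceph tell osd.* injectargs '--osd-recovery-max-active 1'
  ceph tell osd.* injectargs '--osd-recovery-sleep 0.1'
  # Persistent equivalent in ceph.conf, [osd] section:
  #   osd max backfills = 1
  #   osd recovery max active = 1
  #   osd recovery sleep = 0.1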

Until then, I still have this theoretical question about the CRUSH algorithm.
We have a Ceph cluster with 5 OSD hosts, and the CRUSH rule orders Ceph to
put replicas one copy per host.
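
For reference, such a "one copy per host" rule boils down to a single
chooseleaf step in the crushmap; a minimal sketch, assuming the default
root and a made-up rule name:

  # The decisive step in the decompiled crushmap looks like:
  #   step chooseleaf firstn 0 type host
  # On Luminous and later an equivalent rule can be created directly with:
  ceph osd crush rule create-replicated replicated_per_host default host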

If we add 2 OSDs simultaneously on different hosts, how does CRUSH
guarantee that an existing PG that should now be located on those
2 new OSDs does not become unavailable? I suppose it has something to
do with epochs?
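
In case it helps the discussion, this is how I would watch the difference
between the new CRUSH mapping and the OSDs still serving the data during
backfill; only a sketch, and the PG id below is made up:

  # Current osdmap epoch
  ceph osd dump | head -1
  # For one PG, "up" is the new CRUSH mapping, "acting" is the set of OSDs
  # that keeps serving I/O until backfill to the up set completes.
  ceph pg map 1.2f
  # Up/acting for all PGs at once
  ceph pg dump pgs_brief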

I have found a thread mentioning that people have tested completely
remapping PGs: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-July/019577.html
Still, it is not clear what the theoretical constraints are on adding
OSDs in batches (apart from the backfill load). For example, if a PG gets
relocated several times in a row (because OSDs are added without waiting
for the degradation to resolve), how long can that chain of previously
allocated PG locations get?
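
Related to that, the cautious pattern I have seen described is to add the
whole batch while rebalancing is paused, so the PGs move only once to
their final location; a rough sketch, assuming the OSDs themselves are
prepared the usual way on each host:

  # Pause data movement while the new OSDs are added
  ceph osd set norebalance
  ceph osd set nobackfill
  # ... create/activate the new OSDs on their hosts here ...
  # Resume; backfill now runs once against the final CRUSH map
  ceph osd unset nobackfill
  ceph osd unset norebalance
  # Watch progress
  ceph -s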


Any comments demystifying this are appreciated.

Best regards
Ugis