Re: Current best practice for migrating from one EC profile to another?

On Tue, 28 Jul 2020 at 18:50, David Orman <ormandj@xxxxxxxxxxxx> wrote:

> Hi,
>
> As we expand our cluster (adding nodes), we'd like to take advantage of
> better EC profiles enabled by higher server/rack counts. I understand, as
> Ceph currently exists (15.2.4), there is no way to live-migrate from one EC
> profile to another on an existing pool, for example, from 4+2 to 17+3 when
> going from 7 nodes to 21. Is this correct?
>

It depends on how brave you are, but mostly, yes, that's true.
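
For what it's worth, the usual (offline) way to "migrate" is to create a
new pool with the new profile, copy the data across while clients are
stopped, and rename. A rough sketch with made-up pool/profile names; note
that rados cppool is slow and doesn't preserve snapshots, so test it on
something unimportant first:

  ceph osd erasure-code-profile set ec-17-3 k=17 m=3 crush-failure-domain=host
  ceph osd pool create mypool.new 256 256 erasure ec-17-3
  # stop client IO to the old pool, then:
  rados cppool mypool mypool.new
  ceph osd pool rename mypool mypool.old
  ceph osd pool rename mypool.new mypool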

Mostly, people stick with the EC size(s) already configured on their pools
even after adding nodes. The EC profile doesn't have to be anywhere near
the number of nodes: a pool can stay at 4+2 even with 21 nodes, CRUSH will
just pick 6 of the 21 hosts for each PG and spread the data out evenly
across your new, larger cluster.
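
As an example (profile/pool names are made up), a 4+2 profile with host as
the failure domain will place each PG on 6 different hosts no matter how
many hosts you have, and you can check the chosen set per PG afterwards:

  ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
  ceph osd pool create mypool 256 256 erasure ec-4-2
  ceph pg map 2.1f    # example PG id; shows its up/acting OSD set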

Also, 17+3 means 20 nodes are involved in every reasonably sized IO, and
that may not be optimal, both in terms of CPU usage and in terms of
parallelism: if your speed was OK when 6 nodes were needed to reassemble a
piece of data, you can now run about three of those 4+2 IOs in parallel
across 21 nodes, if your network allows.
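
To put rough numbers on that: a 4+2 stripe touches 6 of the 21 hosts, so up
to floor(21/6) = 3 full-stripe IOs can hit completely disjoint host sets at
once, while a 17+3 stripe touches 20 of the 21 hosts, so every full-stripe
IO overlaps with every other one.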

Also, with 17+3 on 21 nodes, a two-node failure means the pool stays
continually degraded* until a host comes back up, whereas a profile with
k+m at or below ~15 would let the cluster repair itself onto the ~15
remaining nodes and be in full swing again even if up to 6 nodes were
lost, however improbable such a failure would be.

*) The +3 normally means three hosts could fail without data loss, but with
   only 19 hosts left under a 17+3 profile, no PG can ever reach
   active+clean, since there aren't 20 hosts to place shards on. You
   wouldn't lose data, but all PGs would stay degraded/undersized, meaning
   you'd run the whole cluster at 17+2 at best.
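
And to put numbers on the repair case: a 4+2 PG only needs 6 distinct
hosts, so even after losing 6 of the 21 there are 15 hosts left, which is
plenty for CRUSH to re-place every shard and bring all PGs back to
active+clean.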

-- 
May the most significant bit of your life be positive.