Migrate whole clusters

Assuming you have the spare throughput/IOPS for Ceph to do its thing
without disturbing your clients, this will work fine.
-Greg
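
Recovery and backfill can also be throttled at runtime so the rebalance is
less disruptive to client traffic. A minimal sketch, with example values only
(not recommendations), which can be reverted the same way once the migration
is done:

    # Lower backfill/recovery concurrency cluster-wide at runtime
    # (example values; tune for your own hardware):
    ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'
    ceph tell osd.* injectargs '--osd-recovery-op-priority 1'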

On Tuesday, May 13, 2014, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote:

> 2014-05-13 21:21 GMT+02:00 Gregory Farnum <greg at inktank.com>:
> > You misunderstand. Migrating between machines for incrementally
> > upgrading your hardware is normal behavior and well-tested (likewise
> > for swapping in all-new hardware, as long as you understand the IO
> > requirements involved). So is decommissioning old hardware. But if you
> > only care about (for instance, numbers pulled out of thin air) 30GB
> > out of 100TB of data in the cluster, it will be *faster* to move only
> > the 30GB you care about, instead of rebalancing all the data in the
> > cluster across to new machines. :)
>
> The subject of this thread is "migrate WHOLE cluster", so I meant migrating
> THE WHOLE CLUSTER, not only a part of it.
>
> If my cluster holds 100TB, I have to migrate 100TB of data.
>
> So, can I manually replace all mons and OSDs one at a time?
> For example: add 1 mon, remove 1 mon, add 1 mon, remove 1 mon, and so
> on until all mons are replaced.
> Then: add 1 OSD, wait for rebalance, remove 1 OSD, wait for rebalance,
> and so on until all OSDs are migrated.
>
> This should work with no downtime and no data loss.
>
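
A minimal sketch of the mon half of that sequence, assuming a new monitor with
the hypothetical id "d" at 10.0.0.4 and an old monitor "a" being retired; all
names and addresses are example values:

    # On the new monitor host:
    ceph auth get mon. -o /tmp/mon-keyring
    ceph mon getmap -o /tmp/monmap
    ceph-mon -i d --mkfs --monmap /tmp/monmap --keyring /tmp/mon-keyring
    ceph-mon -i d --public-addr 10.0.0.4:6789    # or start it via your init system
    ceph mon stat                                # confirm the new mon joined the quorum

    # Only after the new mon is in quorum, retire one old monitor:
    ceph mon remove a

Repeating this one monitor at a time keeps quorum (and ideally an odd monitor
count) throughout.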
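
And a sketch of the OSD half, again with example names (osd.0 stands in for
whichever old OSD is being drained; the ceph-deploy host:disk syntax is the one
current around this release):

    # Bring up one OSD on the new hardware:
    ceph-deploy osd create newhost:/dev/sdb

    # Wait until the cluster is back to active+clean:
    ceph -w          # or poll "ceph health" until HEALTH_OK

    # Then drain and remove one old OSD:
    ceph osd out 0
    # ...wait for the rebalance to finish, then:
    ceph osd crush remove osd.0
    ceph auth del osd.0
    ceph osd rm 0

Done for each OSD in turn, the data stays fully replicated the whole time, at
the cost of possibly moving some placement groups more than once.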


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com

