Re: Calculate recovery time

You can estimate how much data will migrate by comparing the PG count per OSD before and after the change.
Because placement is computed by CRUSH, that comparison can be done against a copy of the OSD map, without actually adding or removing an OSD.
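One way to run that comparison offline (a rough sketch of my own, not something from this thread: it assumes `ceph osd getmap` and `osdmaptool --test-map-pgs-dump` are available, and that your osdmaptool release supports `--mark-out`) is to export the current OSD map, dump the PG-to-OSD mappings as-is, dump them again with the candidate OSD marked out, and count how many mappings change per OSD:

#!/usr/bin/env python3
# Rough estimate of how many PG mappings move when an OSD is marked out,
# done by diffing `osdmaptool --test-map-pgs-dump` output for the current
# map against the same map with the OSD marked out. Assumes `ceph` and
# `osdmaptool` are on PATH and that osdmaptool supports --mark-out
# (recent releases); the dump-line format is an assumption, adjust the
# regex if yours differs.
import re
import subprocess
import sys
from collections import Counter

PG_LINE = re.compile(r'^(\S+)\s+\[([\d,\s]*)\]')   # "<pgid> [osd,osd,...] ..."

def dump_pg_map(osdmap_path, extra_args=()):
    """Return {pgid: [osd, ...]} parsed from osdmaptool's PG dump."""
    out = subprocess.run(
        ['osdmaptool', osdmap_path, *extra_args, '--test-map-pgs-dump'],
        capture_output=True, text=True, check=True).stdout
    mapping = {}
    for line in out.splitlines():
        m = PG_LINE.match(line.strip())
        if m:
            osds = [int(x) for x in m.group(2).split(',') if x.strip()]
            mapping[m.group(1)] = osds
    return mapping

def main(osd_id):
    # Grab the live OSD map once; everything after this is offline.
    subprocess.run(['ceph', 'osd', 'getmap', '-o', 'osdmap.bin'], check=True)
    before = dump_pg_map('osdmap.bin')
    after = dump_pg_map('osdmap.bin', ('--mark-out', str(osd_id)))

    moved = [pg for pg in before if before[pg] != after.get(pg)]
    gains = Counter(o for pg in moved
                    for o in after.get(pg, []) if o not in before[pg])
    print(f'{len(moved)} of {len(before)} PGs would be remapped')
    for osd, n in gains.most_common():
        print(f'  osd.{osd} would gain {n} PGs')

if __name__ == '__main__':
    main(int(sys.argv[1]))

Multiplying the remapped-PG count by your average PG size (`ceph pg dump` lists per-PG bytes) gives a rough data-to-move figure; the same idea works for adding an OSD if you prepare a modified map that includes the new OSD.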

> Date: Thu, 18 Jun 2020 01:18:30 +0430
> From: Seena Fallah <seenafallah@xxxxxxxxx>
> Subject:  Re: Calculate recovery time
> To: Janne Johansson <icepic.dz@xxxxxxxxx>
> Cc: ceph-users <ceph-users@xxxxxxx>
> Message-ID:
> 	<CAK3+OmWxDZf_g0Ok5AEgtLWP+EujrwAQjauxx6J=xANmM7xchA@xxxxxxxxxxxxxx>
> Content-Type: text/plain; charset="UTF-8"
> 
> Yes, I know, but do you have any insight into the backfill or recovery
> priorities Ceph uses when recovering?
> 
> On Wed, Jun 17, 2020 at 11:00 AM Janne Johansson <icepic.dz@xxxxxxxxx>
> wrote:
> 
> > Den ons 17 juni 2020 kl 02:14 skrev Seena Fallah <seenafallah@xxxxxxxxx>:
> >
> >> Hi all.
> >> Is there any way that I could calculate how much time it takes to add
> >> OSD to my cluster and get rebalanced or how much it takes to out OSD
> >> from my cluster?
> >>
> >
> > This is very dependent on all the variables of a cluster: controller
> > and disk speeds, network speeds, CPU/bus speeds, RAM availability and/or
> > allocation, the number of copies the PGs and pools are configured to keep,
> > how many other OSDs share the same CRUSH rules as the missing/new one, how
> > full the OSDs are in general and the outed one in particular, and of course
> > whether your dataset holds a few huge objects or millions of small ones. On
> > top of that, it is affected by the amount of client I/O running at the same
> > time and, to a small degree, might even depend on how quickly the mons can
> > react to the changes in their own database, in case the mons are very slow.
> >
> > This would probably be why you will not just find a fixed number saying
> > "it will always take 5h45m for a 4TB drive". It is a problem that has 10 or
> > more dimensions.
> > But you could always just out one. The cluster must be able to handle a
> > broken drive, so you might as well test it now, instead of on some weekend
> > night before that important database run someone at work needs done.
> >
> > You will see drives break at some point, and if your dataset is anything
> > like everyone else's over the last 50 or so years, your data will grow, so
> > you just might want to get used to the "replace disk" and "add disk"
> > procedures right now.
> >
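To put the many-dimensions point in perspective: a very rough lower bound (my own back-of-envelope with made-up numbers, not anything Ceph reports) is the data to move divided by the smallest aggregate bottleneck, whether that is disk write speed, the replication network, or the backfill throttle:

# Back-of-envelope lower bound; every number below is an assumption to
# replace with your own hardware figures.
bytes_to_move  = 4e12           # roughly one 4 TB OSD's worth of data
disk_write_bps = 10 * 150e6     # 10 peer OSDs each sustaining ~150 MB/s
network_bps    = 10e9 / 8       # a 10 Gb/s replication network
throttle_bps   = 10 * 1 * 80e6  # osd_max_backfills=1 per OSD at ~80 MB/s each

bottleneck = min(disk_write_bps, network_bps, throttle_bps)
print(f'best case ~{bytes_to_move / bottleneck / 3600:.1f} hours')

Real recovery is usually slower than that, since client I/O competes for the same disks and the recovery/backfill priorities deliberately throttle the rebuild.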

_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


