Yes, I know, but is there any point of view on backfill or recovery priority used in Ceph when recovering?

On Wed, Jun 17, 2020 at 11:00 AM Janne Johansson <icepic.dz@xxxxxxxxx> wrote:

> On Wed, Jun 17, 2020 at 02:14, Seena Fallah <seenafallah@xxxxxxxxx> wrote:
>
>> Hi all.
>> Is there any way I could calculate how much time it takes to add an
>> OSD to my cluster and get it rebalanced, or how long it takes to out an
>> OSD from my cluster?
>
> This is very dependent on all the variables of a cluster: controller
> and disk speeds, network speeds, CPU/bus speeds, RAM availability and/or
> RAM allocation, the number of copies the PGs and pools are using, how many
> other OSDs are in the same CRUSH rules as the missing/new one, how full
> the OSDs are in general and the outed one specifically, and of course
> whether you have a few huge objects in your dataset or millions of small
> ones. On top of that, it is affected by the amount of client I/O being
> done at the same time, and in some small sense it might even depend ever
> so slightly on the ability of the mons to react to changes to their own
> database, in case the mons are super slow.
>
> This is probably why you will not find a fixed number saying
> "it will always take 5h45m for a 4TB drive". It is a problem with 10 or
> more dimensions.
>
> But you could always just out one. The cluster must be able to handle a
> broken drive, so you might as well test it now, instead of some weekend
> night before that important database run someone at work needs done.
>
> You will see drives break at some point, and if your dataset is
> anything like everyone else's over the last 50 or so years, your data
> will grow, so you just might want to get used to the "replace disk" and
> "add disk" procedures right now.
>
> --
> May the most significant bit of your life be positive.
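That said, if you only want a floor rather than a prediction, the dominant term is simply bytes to move divided by aggregate recovery throughput. Here is a minimal back-of-envelope sketch; the function name and all the input numbers are hypothetical examples, not measurements from any cluster in this thread:

```python
# Rough LOWER-BOUND estimate for rebalance time after adding or outing an OSD.
# It deliberately ignores everything the reply above lists (client-I/O
# contention, small-object overhead, mon latency, backfill throttles such as
# osd_max_backfills), all of which only make the real time longer.

def rebalance_time_hours(bytes_to_move, recovery_mb_per_s_per_osd, peer_osds):
    """Naive estimate: data to re-replicate / aggregate recovery throughput."""
    aggregate_bytes_per_s = recovery_mb_per_s_per_osd * 1e6 * peer_osds
    return bytes_to_move / aggregate_bytes_per_s / 3600

# Hypothetical example: outing a half-full 4 TB OSD (~2 TB to re-replicate),
# with 10 peer OSDs each sustaining ~50 MB/s of recovery traffic.
print(round(rebalance_time_hours(2e12, 50, 10), 1), "hours (optimistic floor)")
```

Comparing a floor like this against what an actual test "out" takes (as suggested above) is a reasonable way to see how much the other dimensions cost on your particular hardware.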