Re: replace osd with Octopus


 



> Given my above understanding, all-to-all is no different from
> one-to-all. In either case, PGs of one disk are remapped to others.
> 
> I must be missing something seriously:)


It’s a bit subtle, but I think part of what Frank is getting at is that when OSDs are backfilled / recovered sequentially, some data ends up being moved more than once.  If one batches up such changes, data shouldn’t move more than once.  There are other factors too, like decreased capacity from letting failed drives pile up, and the impact of peering if one activates a large number of OSDs at the same time.  Notably, one has to be careful when *removing* OSDs in batches not to cause PGs to go inactive.
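A toy model can illustrate the double-move effect. This is a sketch using rendezvous (highest-random-weight) hashing, not CRUSH itself, and the OSD names and PG counts are made up for illustration: when two OSDs are drained one after the other, any PG that relocated from the first OSD onto the second gets moved twice, whereas draining both in one batch moves each affected PG exactly once.

```python
import hashlib

def score(pg, osd):
    # Deterministic pseudo-random score for a (pg, osd) pair.
    return hashlib.sha256(f"{pg}-{osd}".encode()).hexdigest()

def place(pg, osds):
    # Rendezvous hashing: the PG lives on the OSD with the highest score.
    return max(osds, key=lambda o: score(pg, o))

osds = [f"osd.{i}" for i in range(10)]
pgs = list(range(512))

orig = {pg: place(pg, osds) for pg in pgs}

# Sequential: drain osd.0 first, then osd.1.
after_first = {pg: place(pg, [o for o in osds if o != "osd.0"]) for pg in pgs}
final = {pg: place(pg, [o for o in osds if o not in ("osd.0", "osd.1")]) for pg in pgs}
seq_moves = (sum(orig[pg] != after_first[pg] for pg in pgs)
             + sum(after_first[pg] != final[pg] for pg in pgs))

# Batched: drain both OSDs at once; each displaced PG moves a single time.
batch_moves = sum(orig[pg] != final[pg] for pg in pgs)

print(f"sequential moves: {seq_moves}, batched moves: {batch_moves}")
```

Both approaches end with identical placements, but the sequential count includes every PG that hopped from osd.0 to osd.1 and then had to move again, which is the extra traffic batching avoids.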

In Ceph there are often multiple ways to do a thing, with pros and cons.  Sometimes there’s value in keeping it simple, especially for execution by 24/7 NOC personnel who have a lot going on.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx





