Temporary degradation when adding OSDs

On Thursday, July 10, 2014, Erik Logtenberg <erik at logtenberg.eu> wrote:

>
> > Yeah, Ceph will never voluntarily reduce the redundancy. I believe
> > splitting the "degraded" state into separate "wrongly placed" and
> > "degraded" (reduced redundancy) states is currently on the menu for
> > the Giant release, but it's not been done yet.
>
> That would greatly improve the accuracy of Ceph's status reports.
>
> Does Ceph currently know about the difference between these states well
> enough to prioritize intelligently? Specifically, if I add an OSD and
> Ceph starts moving data around, but another OSD fails during that time,
> is Ceph smart enough to prioritize re-replicating the lost copies
> before continuing to move around data that is still fully replicated?
>

I believe that when choosing the next PG to backfill, OSDs prefer PGs that
are undersized. But an OSD won't abandon a backfill already in progress if
another PG goes undersized mid-process, and the prioritization isn't
guaranteed anyway: backfill is distributed over the cluster, while the
decisions have to be made locally. (So a backfilling OSD which has no
undersized PGs might beat out an OSD with undersized PGs to get the
"reservation".)
-Greg


-- 
Software Engineer #42 @ http://inktank.com | http://ceph.com
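
[Editor's note: below is a minimal sketch of the per-OSD selection logic
described above. It is illustrative Python, not Ceph's actual C++
implementation; the PG fields (pool_size, current_replicas) and the
next_pg_to_backfill() helper are hypothetical names invented for the
example.]

    # Hypothetical sketch (not Ceph's code) of per-OSD backfill selection:
    # each OSD independently picks its next PG, preferring undersized PGs
    # (missing replicas) over PGs that are merely misplaced.

    from dataclasses import dataclass
    from typing import List, Optional

    @dataclass
    class PG:
        pgid: str
        pool_size: int         # desired replica count for the pool
        current_replicas: int  # replicas that actually exist right now

        @property
        def undersized(self) -> bool:
            # Reduced redundancy: fewer copies exist than the pool wants.
            return self.current_replicas < self.pool_size

    def next_pg_to_backfill(candidates: List[PG]) -> Optional[PG]:
        """Pick the PG this OSD will next try to reserve for backfill.

        Undersized PGs come first; among those, prefer the PG missing
        the most replicas. Fully replicated but misplaced PGs come last.
        """
        if not candidates:
            return None
        return max(candidates,
                   key=lambda pg: (pg.undersized,
                                   pg.pool_size - pg.current_replicas))

    # Each OSD runs this choice locally and then competes for a limited
    # number of backfill reservations, so an OSD whose candidates are all
    # merely misplaced can still win a reservation ahead of an OSD that
    # holds undersized PGs -- the caveat noted in the message above.

For example, a PG with current_replicas=2 in a size-3 pool would be chosen
before a misplaced PG whose three replicas are all intact, but only among
the PGs that this particular OSD is considering.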

