Re: Understanding Ceph in case of a failure

Hi,

On 2017-03-20 at 05:34, Christian Balzer wrote:
> you do realize that you very much have a corner case setup there, right?

Yes, I know that this is not exactly a recommended setup, but I hoped
it would be good enough to start with :-).

> That being said, if you'd search the archives, a similar question was
> raised by me a long time ago.

Do you have a reference to this? It sounds interesting, but I couldn't
find the particular thread, and you have posted quite a lot on this
list already :-).

> The new CRUSH map of course results in different computations of where PGs
> should live, so they get copied to their new primary OSDs.
> This is the I/O you're seeing and that's why it stops eventually.
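Just to check my understanding of the quoted explanation: placement is a
pure function of the PG and the set of OSDs in the map, so changing the
set changes the result for some PGs. A toy sketch (not real CRUSH — I'm
substituting simple rendezvous/HRW hashing, the same family of ideas, and
the `osd.N` names are made up) of that effect:

```python
# Toy illustration of why removing an OSD from the map moves data:
# placement is a deterministic function of (pg, osd set), so shrinking
# the set re-homes exactly the PGs that lived on the removed OSD.
import hashlib

def place(pg: int, osds: list[str]) -> str:
    # Rendezvous (highest-random-weight) hashing: each OSD gets a
    # per-PG score, and the PG lands on the highest-scoring OSD.
    def score(osd: str) -> int:
        return int(hashlib.sha256(f"{pg}:{osd}".encode()).hexdigest(), 16)
    return max(osds, key=score)

osds = ["osd.0", "osd.1", "osd.2", "osd.3"]
before = {pg: place(pg, osds) for pg in range(32)}
# Remove osd.3 from the "map" and recompute every placement.
after = {pg: place(pg, [o for o in osds if o != "osd.3"]) for pg in range(32)}
moved = [pg for pg in before if before[pg] != after[pg]]
print(f"{len(moved)} of 32 PGs re-homed after removing osd.3")
```

With this scheme only the PGs that were on the removed OSD move — which
matches the copy traffic described above stopping once those PGs have
found their new primaries.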

Hm, OK, that might be the explanation. I hadn't considered that the OSD
gets removed from the CRUSH map and a new location is calculated. Is
there a way to prevent this in my case?
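From what I've read, the `noout` flag seems to be the usual way to cover
planned downtime — it keeps a down OSD from being marked out and its PGs
from being remapped — though I'm not sure it helps once the OSD is
actually removed from the CRUSH map. Something along these lines (the
OSD id and service name are just placeholders for my setup):

```shell
# Keep down OSDs from being marked "out" (and their PGs remapped)
# during planned maintenance; clear the flag again afterwards.
ceph osd set noout
systemctl stop ceph-osd@3     # do the maintenance on the OSD host
systemctl start ceph-osd@3
ceph osd unset noout
```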

Thank you very much for your insights!

Best regards,
Karol Babioch

Attachment: signature.asc
Description: OpenPGP digital signature

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
