Re: Need info about ceph bluestore autorepair


 



On Thu, 6 Feb 2020 at 15:06, Mario Giammarco <mgiammarco@xxxxxxxxx> wrote:

> Hello,
> if I have a pool with replica 3 what happens when one replica is corrupted?
>

The PG this happens in will go from active+clean to active+inconsistent,
typically noticed during a (deep) scrub when the stored checksums are verified.
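
To see it, something along these lines works (a rough sketch; <pgid> is a
placeholder for whatever PG id "ceph health detail" reports):

  # list the PGs that scrub flagged as inconsistent
  ceph health detail
  # show which copy of which object failed its checksum
  rados list-inconsistent-obj <pgid> --format=json-pretty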


> I suppose ceph detects the bad replica using checksums and replaces it with a
> good one
>

There is a "osd fix on error = true/false" setting (whose name I can't
remember right
off the bat now) which controls this. If false, you need to "ceph pg
repair" it, then
it happens as you describe.
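
Roughly (again just a sketch, with <pgid> being the inconsistent PG from
health detail, and assuming the option name above is the right one):

  # repair one PG by hand
  ceph pg repair <pgid>
  # or let scrub repair such errors automatically
  ceph config set osd osd_scrub_auto_repair true

With bluestore checksums the repair can usually tell which copy is the bad one,
so it pulls a fresh copy from a good replica.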


> If I have a pool with replica 2 what happens?
>

Same.

Except that with repl=2 you run a higher chance of surprises* on the remaining
replica while the bad one is waiting to be repaired, since there is only one
other copy left to serve and repair from.

*) i.e. data loss, tears and less sleep for ceph admins
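
If you end up wanting to move a pool from 2 to 3 replicas, it is roughly (for a
replicated pool, <poolname> being a placeholder) a matter of:

  ceph osd pool set <poolname> size 3
  ceph osd pool set <poolname> min_size 2

and then waiting for the backfill to finish.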

-- 
May the most significant bit of your life be positive.
_______________________________________________
ceph-users mailing list -- ceph-users@xxxxxxx
To unsubscribe send an email to ceph-users-leave@xxxxxxx


