Re: Is ceph production ready? [was: Ceph PG Incomplete = Cluster unusable]

Lionel, Christian,

we have exactly the same trouble as Christian,
namely

Christian Eichelmann [Fri, Jan 09, 2015 at 10:43:20AM +0100]:
> We still don't know what caused this specific error...

and

> ...there is currently no way to make ceph forget about the data of this pg and create it as an empty one. So the only way
> to make this pool usable again is to lose all your data in there. 

I wonder what the ceph developers' position is on dropping
(emptying) specific pgs.
Is that a use case that was never considered or tested?

For us it is essential to be able to keep the pool/cluster
running even when pgs have been lost.

Even though I do not like the fact that we lost a pg for
an unknown reason, I would prefer ceph to handle that case and
recover to the best possible state.

Specifically, I wonder if we could integrate a tool that shows
which (parts of) rbd images would be affected by dropping a pg;
a rough sketch of the idea follows below. That would give us the
chance to selectively restore VMs in case this happens again.
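
To make this more concrete, here is a rough, untested python
sketch of what such a tool could look like. It just shells out to
the stock rbd/ceph CLI, assumes format-2 rbd images, and takes the
pool name and the lost pg id as arguments:

#!/usr/bin/env python
# Rough sketch, untested: list which rbd images (and which byte
# ranges inside them) have objects that map into a given pg.
# Assumes format-2 rbd images and the stock "rbd" / "ceph" CLIs.
import json
import subprocess
import sys

def cli(*args):
    # run an rbd/ceph command that supports --format json and parse it
    return json.loads(subprocess.check_output(args))

def affected_extents(pool, lost_pg):
    for image in cli("rbd", "ls", "-p", pool, "--format", "json"):
        info = cli("rbd", "info", "--format", "json",
                   "%s/%s" % (pool, image))
        obj_size = 2 ** info["order"]          # bytes per rados object
        num_objs = (info["size"] + obj_size - 1) // obj_size
        for i in range(num_objs):
            # format-2 data objects are named <prefix>.<16-digit hex index>;
            # the mapping is computed even for objects never written (sparse)
            obj = "%s.%016x" % (info["block_name_prefix"], i)
            mapping = cli("ceph", "osd", "map", pool, obj,
                          "--format", "json")
            if mapping["pgid"] == lost_pg:
                # this byte range of the image lives in the lost pg
                yield image, i * obj_size, obj_size

if __name__ == "__main__":
    pool, lost_pg = sys.argv[1], sys.argv[2]   # e.g. rbd 2.5f
    for image, offset, length in affected_extents(pool, lost_pg):
        print("%s: %d bytes at offset %d" % (image, length, offset))

This is of course slow, since it spawns one 'ceph osd map' per
object, but it would at least tell us which VMs (and which byte
ranges inside their images) we would have to restore from backup.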

Cheers,

Nico

-- 
New PGP key: 659B 0D91 E86E 7E24 FD15  69D0 C729 21A1 293F 2D24
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


