Hi Jason,
I've followed your steps and now I can list all the available data blocks of my image, but I don't know how to rebuild a sparse image. I found this script (https://raw.githubusercontent.com/smmoore/ceph/master/rbd_restore.sh) and this post (https://www.sebastien-han.fr/blog/2015/01/29/ceph-recover-a-rbd-image-from-a-dead-cluster/), but I don't know whether they can help me.
Any suggestions?
Thanks.
2016-09-21 22:35 GMT+02:00 Jason Dillaman <jdillama@xxxxxxxxxx>:
Unfortunately, it sounds like the image's header object was lost
during your corruption event. While you can manually retrieve the
image data blocks from the cluster, undoubtedly many might be lost
and/or corrupted as well.
You'll first need to determine the internal id of your image:
$ rados --pool images getomapval rbd_directory
name_07e54256-d123-4e61-a23a-7f8008340751
value (16 bytes) :
00000000 0c 00 00 00 31 30 31 34 31 30 39 63 66 39 32 65 |....1014109cf92e|
00000010
In my example above, the image id (1014109cf92e in this case) is the
string starting after the first four bytes (the id length). I can then
use the rados tool to list all available data blocks:
$ rados --pool images ls | grep rbd_data.1014109cf92e | sort
rbd_data.1014109cf92e.0000000000000000
rbd_data.1014109cf92e.000000000000000b
rbd_data.1014109cf92e.0000000000000010
The sequence of hex digits at the end of each data object name is the
object number, and it maps to a byte offset within the image (object
number * 4MB = byte offset, assuming the default 4MB object size and no
fancy striping enabled).
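For example, taking the three objects listed above, a quick shell
calculation (assuming the default 4MB, i.e. 4194304-byte, object size)
gives the offsets:

$ printf '%d\n' $(( 0x0000000000000000 * 4194304 ))   # offset 0
$ printf '%d\n' $(( 0x000000000000000b * 4194304 ))   # offset 46137344 (44MB)
$ printf '%d\n' $(( 0x0000000000000010 * 4194304 ))   # offset 67108864 (64MB)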
You should be able to script something up to rebuild a sparse image
with whatever data is still available in your cluster.
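Something along these lines could serve as a starting point. It's an
untested sketch: it assumes the default 4MB object size, no fancy
striping, and that you know the original image size; the pool name,
image id, output path, and the 8G size are placeholders to adjust.

#!/bin/bash
# Rough sketch: rebuild a sparse image from whatever rbd_data objects survive.
POOL=images
IMAGE_ID=1014109cf92e
OBJECT_SIZE=$((4 * 1024 * 1024))
OUT=/tmp/restored.img

# Pre-create a sparse file of the original image size (8G is a placeholder).
truncate -s 8G "$OUT"

rados --pool "$POOL" ls | grep "rbd_data.$IMAGE_ID." | sort | while read -r obj; do
    # The hex suffix after the last '.' is the object number.
    objnum=$(( 16#${obj##*.} ))
    # Fetch the object and write it at objnum * OBJECT_SIZE; objects that
    # no longer exist simply remain sparse holes in the output file.
    rados --pool "$POOL" get "$obj" /tmp/rbd_obj.tmp
    dd if=/tmp/rbd_obj.tmp of="$OUT" bs=$OBJECT_SIZE seek="$objnum" conv=notrunc status=none
    rm -f /tmp/rbd_obj.tmp
done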
On Wed, Sep 21, 2016 at 11:12 AM, Fran Barrera <franbarrera6@xxxxxxxxx> wrote:
> Hello,
>
> I have a Ceph Jewel cluster with 4 OSDs and only one monitor, integrated with
> OpenStack Mitaka.
>
> Two OSDs were down; one of them was recovered with a service restart. The
> cluster began to recover and was OK. In the end, the disk of the other OSD was
> corrupted, and the solution was to reformat it and recreate the OSD.
>
> Now the cluster is OK, but there is a problem with some of the images
> stored in Ceph.
>
> $ rbd list -p images|grep 07e54256-d123-4e61-a23a-7f8008340751
> 07e54256-d123-4e61-a23a-7f8008340751
>
> $ rbd export -p images 07e54256-d123-4e61-a23a-7f8008340751 /tmp/image.img
> 2016-09-21 17:07:00.889379 7f51f9520700 -1 librbd::image::OpenRequest:
> failed to retreive immutable metadata: (2) No such file or directory
> rbd: error opening image 07e54256-d123-4e61-a23a-7f8008340751: (2) No such
> file or directory
>
> Ceph can list the image but nothing more; an export, for example, fails. So
> OpenStack cannot retrieve this image. I tried repairing the PG, but it appears
> to be OK.
> Is there any solution for this?
>
> Kind Regards,
> Fran.
>
>
--
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com