Re: rados export/import fail

The pool IDs can be updated to point to the correct pool [1] with
enough patience. The larger issue is that the snapshots are not
preserved, and thus your cloned images can be corrupted if the parent
image was modified after the creation of the protected snapshot.

[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2015-May/001398.html
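
One way to inspect what needs patching, sketched here under the
assumption of RBD image format 2 object naming (<image-id> is a
placeholder for the clone's internal id), is to look at the parent
pointer stored in each clone's header object:

rados -p pool1 getomapval rbd_header.<image-id> parent

The value is a binary-encoded parent spec (parent pool id, image id,
snapshot id); a patched copy would then be written back with
"rados setomapval", which is where the patience comes in.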

On Mon, Oct 16, 2017 at 8:11 AM, Wido den Hollander <wido@xxxxxxxx> wrote:
>
>> On 16 October 2017 at 13:00, Nagy Ákos <nagy.akos@xxxxxxxxxxxxxx> wrote:
>>
>>
>> Thanks,
>>
>> but I erase all of the data, I have only this backup.
>
> I hate to bring bad news, but it will not work. The pools have different IDs, and that will make it very difficult to get this working again.
>
> Wido
>
>> If the restore worked for 3 pools, can I do it for the remaining 2?
>>
>> What can I try to set to import it, or how can I find these IDs?
>>
>> On 16 October 2017 at 13:39, John Spray wrote:
>> > On Mon, Oct 16, 2017 at 11:35 AM, Nagy Ákos <nagy.akos@xxxxxxxxxxxxxx> wrote:
>> >> Hi,
>> >>
>> >> I want to upgrade my Ceph cluster from Jewel to Luminous, and switch to BlueStore.
>> >>
>> >> For that I exported the pools from the old cluster:
>> > This is not the way to do it.  You should convert your OSDs from
>> > filestore to bluestore one by one, and let the data re-replicate to
>> > the new OSDs.
>> >
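>> > As a rough sketch of one round of that conversion on Luminous (the
>> > exact commands vary by release and deployment tool, and the OSD id
>> > and /dev/sdX below are placeholders):
>> >
>> >   # drain the OSD and wait until all PGs are active+clean again
>> >   ceph osd out 0
>> >   # then tear it down and recreate it with a bluestore backend
>> >   systemctl stop ceph-osd@0
>> >   ceph osd purge 0 --yes-i-really-mean-it
>> >   ceph-volume lvm create --bluestore --data /dev/sdX
>> >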
>> > Dumping data out of one Ceph cluster and into another will not work,
>> > because things like RBD images record things like the ID of the pool
>> > where their parent image is, and pool IDs are usually different
>> > between clusters.
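>> >
>> > You can see the mismatch directly; for example:
>> >
>> >   ceph osd pool ls detail
>> >
>> > lists every pool with its numeric id, and run against the old and the
>> > new cluster it will generally show the same names with different ids.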
>> >
>> > John
>> >
>> >> rados export -p pool1 pool1.ceph
>> >>
>> >> and after the upgrade and OSD recreation:
>> >>
>> >> rados --create -p pool1 import pool1.ceph
>> >>
>> >> I can import the backup without error, but when I try to map an image, I
>> >> get an error:
>> >>
>> >> rbd --image container1 --pool pool1 map
>> >>
>> >> rbd: sysfs write failed
>> >> In some cases useful info is found in syslog - try "dmesg | tail".
>> >> rbd: map failed: (2) No such file or directory
>> >>
>> >> dmesg | tail
>> >>
>> >> [160606.729840] rbd: image container1 : WARNING: kernel layering is
>> >> EXPERIMENTAL!
>> >> [160606.730675] libceph: tid 86731 pool does not exist
>> >>
>> >>
>> >> When I try to get info about the image:
>> >>
>> >> rbd info pool1/container1
>> >>
>> >> 2017-10-16 13:18:17.404858 7f35a37fe700 -1
>> >> librbd::image::RefreshParentRequest: failed to open parent image: (2) No
>> >> such file or directory
>> >> 2017-10-16 13:18:17.404903 7f35a37fe700 -1 librbd::image::RefreshRequest:
>> >> failed to refresh parent image: (2) No such file or directory
>> >> 2017-10-16 13:18:17.404930 7f35a37fe700 -1 librbd::image::OpenRequest:
>> >> failed to refresh image: (2) No such file or directory
>> >> rbd: error opening image container1: (2) No such file or directory
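>> >>
>> >> To confirm the stale parent reference, the header objects can be read
>> >> directly (a sketch assuming RBD image format 2 object naming; the
>> >> <image-id> is a placeholder taken from the first command's output):
>> >>
>> >> rados -p pool1 get rbd_id.container1 - | strings
>> >> rados -p pool1 listomapvals rbd_header.<image-id>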
>> >>
>> >>
>> >> I checked the exported image checksum after export and before import, and
>> >> it matches, and I could restore three pools: one with 60 MB, one with 1.2 GB,
>> >> and one with 25 GB of data.
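>> >>
>> >> For example:
>> >>
>> >> md5sum pool1.ceph
>> >>
>> >> run on both sides (any checksum tool would do).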
>> >>
>> >> The problematic one has 60 GB of data.
>> >>
>> >> The pool stores LXD container images.
>> >>
>> >> Any help is highly appreciated.
>> >>
>> >> --
>> >> Ákos
>> >>
>> >>
>>
>> --
>> Ákos
>>
-- 
Jason
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com