This was related to the caching layer, which doesn't support snapshotting per the docs... just for the sake of closing the thread.
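For anyone who lands on this thread later: the cache-tier mode can be inspected and switched back to writeback roughly like this (the cache pool name "cache-pool" below is only a placeholder, use your own pool name):

# show the pool definitions, including the cache tier's cache_mode
ceph osd dump | grep cache-pool

# switch the cache tier from forward back to writeback
ceph osd tier cache-mode cache-pool writeback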
On 17 August 2015 at 21:15, Voloshanenko Igor <igor.voloshanenko@xxxxxxxxx> wrote:
Hi all, can you please help me with an unexplained situation?

All snapshots inside Ceph are broken. As an example, we have a VM template stored as an RBD image in Ceph. We can map it and check that everything is fine with it:

root@test:~# rbd map cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5
/dev/rbd0
root@test:~# parted /dev/rbd0 print
Model: Unknown (unknown)
Disk /dev/rbd0: 10.7GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Number  Start   End     Size    Type     File system  Flags
 1      1049kB  525MB   524MB   primary  ext4         boot
 2      525MB   10.7GB  10.2GB  primary               lvm

Then I want to create a snapshot, so I do:

root@test:~# rbd snap create cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap

And now I want to map it:

root@test:~# rbd map cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
/dev/rbd1
root@test:~# parted /dev/rbd1 print
Warning: Unable to open /dev/rbd1 read-write (Read-only file system). /dev/rbd1 has been opened read-only.
Warning: Unable to open /dev/rbd1 read-write (Read-only file system). /dev/rbd1 has been opened read-only.
Error: /dev/rbd1: unrecognised disk label

Even the md5 sums differ:

root@ix-s2:~# md5sum /dev/rbd0
9a47797a07fee3a3d71316e22891d752  /dev/rbd0
root@ix-s2:~# md5sum /dev/rbd1
e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1

OK, now I protect the snapshot and create a clone... but it is the same thing: the md5 of the clone is the same as the snapshot's:

root@test:~# rbd unmap /dev/rbd1
root@test:~# rbd snap protect cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap
root@test:~# rbd clone cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5@new_snap cold-storage/test-image
root@test:~# rbd map cold-storage/test-image
/dev/rbd1
root@test:~# md5sum /dev/rbd1
e450f50b9ffa0073fae940ee858a43ce  /dev/rbd1

... and it is broken too:

root@test:~# parted /dev/rbd1 print
Error: /dev/rbd1: unrecognised disk label

=========
Tech details:

root@test:~# ceph -v
ceph version 0.94.2 (5fb85614ca8f354284c713a2f9c610860720bbf3)

We have 2 inconsistent PGs, but none of these images are placed on those PGs:

root@test:~# ceph health detail
HEALTH_ERR 2 pgs inconsistent; 18 scrub errors
pg 2.490 is active+clean+inconsistent, acting [56,15,29]
pg 2.c4 is active+clean+inconsistent, acting [56,10,42]
18 scrub errors
============

root@test:~# ceph osd map cold-storage 0e23c701-401d-4465-b9b4-c02939d57bb5
osdmap e16770 pool 'cold-storage' (2) object '0e23c701-401d-4465-b9b4-c02939d57bb5' -> pg 2.74458f70 (2.770) -> up ([37,15,14], p37) acting ([37,15,14], p37)
root@test:~# ceph osd map cold-storage 0e23c701-401d-4465-b9b4-c02939d57bb5@snap
osdmap e16770 pool 'cold-storage' (2) object '0e23c701-401d-4465-b9b4-c02939d57bb5@snap' -> pg 2.793cd4a3 (2.4a3) -> up ([12,23,17], p12) acting ([12,23,17], p12)
root@test:~# ceph osd map cold-storage 0e23c701-401d-4465-b9b4-c02939d57bb5@test-image
osdmap e16770 pool 'cold-storage' (2) object '0e23c701-401d-4465-b9b4-c02939d57bb5@test-image' -> pg 2.9519c2a9 (2.2a9) -> up ([12,44,23], p12) acting ([12,44,23], p12)

We also use a cache layer, which at the moment is in forward mode.

Can you please help me with this? My brain has stopped understanding what is going on...

Thanks in advance!
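One caveat with the "ceph osd map" checks above: that command computes placement for whatever literal object name it is given, while RBD (format 2) images actually keep their data in rbd_data.<prefix>.* objects, so the image, snapshot and clone data may sit on different PGs than those three lookups suggest. A rough way to check the real data objects instead (the prefix value below is only an illustration, take the actual one from rbd info):

# find the image's data-object prefix
rbd info cold-storage/0e23c701-401d-4465-b9b4-c02939d57bb5 | grep block_name_prefix
# e.g. block_name_prefix: rbd_data.123456789abc   (illustrative value)

# map every data object of that image to its PG/OSDs
rados -p cold-storage ls | grep rbd_data.123456789abc | \
  while read obj; do ceph osd map cold-storage "$obj"; done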
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
Andrija Panić