Now I tried mounting cve-backup again. It mounted OK this time and I copied all the data out of it.
I can't continue using Ceph in production now :(
It takes a lot of experience with Ceph to quickly locate the source of an error and repair it fast.
I will try to keep using it for data without critical availability requirements (for example, backups).
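For reference, the copy-out sequence described above would look roughly like this. This is a sketch, not the exact commands from the thread; the mount point and destination directory are placeholders:

rbd map cve-backup --pool rbd
mount -o ro /dev/rbd1 /mnt/cve-backup
rsync -a /mnt/cve-backup/ /backup/cve-backup/
umount /mnt/cve-backup
rbd unmap /dev/rbd1

Mounting read-only is deliberate: on a still-degraded cluster it avoids issuing new writes to the image while the data is being rescued.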
2013/9/18 Laurent Barbe <laurent@xxxxxxxxxxx>
Sorry, I don't really know where the problem is.
I hope someone else on the mailing list will be able to respond. I am interested to understand.
Laurent
On 18/09/2013 18:08, Timofey Koolin wrote:
uname -a
Linux sh13-1.s.f1f2.ru 3.5.0-34-generic #55~precise1-Ubuntu SMP Fri Jun 7 16:25:50 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
ceph pg stat
v1371828: 1200 pgs: 999 active+clean, 57 active+remapped+wait_backfill, 1 active+recovering+remapped, 104 stale+active+clean, 4 active+remapped+backfilling, 30 active+recovery_wait+remapped, 5 active+degraded+remapped+backfilling; 874 GB data, 1739 GB used, 5893 GB / 7632 GB avail; 113499/1861522 degraded (6.097%); recovering 5 o/s, 4847KB/s
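To see exactly which pgs are stuck and why, the standard follow-up commands would be (a sketch, using ordinary ceph subcommands, not taken from the thread):

ceph health detail
ceph pg dump_stuck unclean
ceph pg dump_stuck stale

The 104 stale+active+clean pgs are the notable part of this output: stale means the monitors have not heard from those pgs' primary OSDs recently, and client I/O that touches them will hang, which matches the symptoms described.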
Blog: www.rekby.ru
Which kernel version are you using on the client?
What is the status of the pgs?
# uname -a
# ceph pg stat
Laurent
On 18/09/2013 17:45, Timofey wrote:
yes, format 1:
rbd info cve-backup | grep format
format: 1
No, only this about the image:
dmesg | grep rbd
[ 294.355188] rbd: loaded rbd (rados block device)
[ 395.515822] rbd1: unknown partition table
[ 395.515915] rbd: rbd1: added with size 0x1900000000
[ 1259.279812] rbd1: unknown partition table
[ 1259.279909] rbd: rbd1: added with size 0x1900000000
[ 1384.796308] rbd1: unknown partition table
[ 1384.796421] rbd: rbd1: added with size 0x40000000
[ 1982.570185] rbd1: unknown partition table
[ 1982.570274] rbd: rbd1: added with size 0x1900000000
The messages about rbd1 above are old. Right now I don't have any rbd image mapped.
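A quick way to confirm that nothing is currently mapped is the standard showmapped command (shown here as a sketch):

rbd showmapped

As a side note, the sizes in the dmesg lines decode consistently: 0x1900000000 bytes is 100 GiB and 0x40000000 bytes is 1 GiB, so the kernel saw the images at their full size when they were mapped.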
Yes, it is still degraded. The degraded percentage is now about 6%; it was about 8% when you first replied to me.
I tried mapping the image from another client - it mapped. Then I tried mapping it from the first client - it mapped too.
Before you asked, I had already tried it from a different client - it would not map. I tried rebooting all the servers in the cluster - that didn't help.
After a few minutes it hung up (I/O errors).
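When a mapped rbd device starts returning I/O errors, the kernel client normally logs the reason; a first check on the client would be something like this (a sketch, assuming the in-kernel rbd/libceph client):

dmesg | egrep 'libceph|rbd' | tail -20

Lost OSD sessions or socket errors reported there would fit a cluster that still has stale and degraded pgs.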
Is the cluster still degraded?
Do you have something in the dmesg log?
Are you sure you are using format 1?
# rbd info cve-backup | grep format
What is the kernel version on the client?
Did you try to map it from another client?
Laurent
On 18/09/2013 16:57, Timofey wrote:
Yes, I see it - cve-backup:
rados get -p rbd rbd_directory - | strings
cve-backup
...
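For format 1 images this rbd_directory object is what the normal CLI reads as well, so the same listing should come back from the standard command (shown for comparison):

rbd ls -p rbd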
old name:
rados get -p rbd cve_lxc-backup.rbd - | strings
error getting rbd/cve_lxc-backup.rbd: No such file or directory
new name:
rados get -p rbd cve-backup.rbd - | strings
<<< Rados Block Device Image >>>
rb.0.51f77.2ae8944a
001.005
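That header is the format 1 layout: the <image>.rbd object stores the magic string and the block-name prefix (rb.0.51f77.2ae8944a here), and the image data lives in objects named with that prefix plus an object number. A sketch for spot-checking that the data objects are reachable, using the prefix from the output above:

rados -p rbd ls | grep rb.0.51f77.2ae8944a | head

If those objects list fine but the mapped device still throws I/O errors, the problem is more likely in the pg/OSD state than in the image itself.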
--
Blog: www.rekby.ru