Your OSDs are full. The cluster will stay blocked until space is freed up and
both OSDs leave the full state.
You have 2 OSDs, so I'm assuming you are running a replica size of 2? A
quick (but risky) method might be to reduce your replica size to 1 to get
the cluster unblocked, clean up space, and then go back to replica size 2,
as sketched below.
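Roughly something like this, assuming your data is in the default 'rbd'
pool (substitute your own pool name). Note that while size is 1 you have
no redundancy at all, so only do this long enough to delete data:

  # drop to a single replica so the OSDs can free space (no redundancy while set)
  ceph osd pool set rbd size 1
  ceph osd pool set rbd min_size 1
  # remove unneeded RBD images / data, wait for the cluster to leave the full state,
  # then restore the original replication level
  ceph osd pool set rbd size 2
  ceph osd pool set rbd min_size 1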
On 2015-01-02 13:44, Max Power wrote:
After I tried to copy some files onto an rbd device I ran into an "osd full"
state. So I restarted my server and wanted to remove some files from the
filesystem again. But now I cannot execute "rbd map" anymore and I do not
know why.
This all happened in my testing environment. This is the current state
according to 'ceph status':
 health HEALTH_ERR
        2 full osd(s)
 monmap e1: 1 mons at {test1=10.0.0.141:6789/0}
        election epoch 1, quorum 0 test1
 osdmap e69: 2 osds: 2 up, 2 in
        flags full
  pgmap v469: 100 pgs, 1 pools, 1727 MB data, 438 objects
        3917 MB used, 156 MB / 4073 MB avail
             100 active+clean
strace reports this before 'rbd map pool/disk' hangs
[...]
access("/sys/bus/rbd", F_OK) = 0
access("/run/udev/control", F_OK) = 0
socket(PF_NETLINK, SOCK_RAW|SOCK_CLOEXEC|SOCK_NONBLOCK, NETLINK_KOBJECT_UEVENT) = 3
setsockopt(3, SOL_SOCKET, SO_ATTACH_FILTER, "\r\0\0\0\0\0\0\0@k\211\240\377\177\0\0", 16) = 0
bind(3, {sa_family=AF_NETLINK, pid=0, groups=00000002}, 12) = 0
getsockname(3, {sa_family=AF_NETLINK, pid=1192, groups=00000002}, [12]) = 0
setsockopt(3, SOL_SOCKET, SO_PASSCRED, [1], 4) = 0
open("/sys/bus/rbd/add_single_major", O_WRONLY) = 4
write(4, "10.0.0.141:6789 name=admin,key=c"..., 61
Any idea why I cannot access the rbd device anymore?
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com