Dear All,

We set up Ceph about a year ago and have been using it since. Here is a summary of our setup. We run Ceph on three servers: cs02, cs03, and cs04. This is how we set it up:

1. We created several OSDs across the three servers, using commands like:
   > ceph-deploy osd create cs02:/dev/sdc …. cs03:/dev/… cs04:/dev/….

2. We created an MDS on cs02:
   > ceph-deploy mds create ilab-cs02

3. We created a RADOS block device image on cs02:
   > rbd create rbd-research --size 10240000

4. We mapped rbd-research:
   > sudo rbd map rbd-research --pool rbd

5. We made a file system on it:
   > sudo mkfs.ext4 /dev/rbd/rbd/rbd-research

6. We created a mount point and set up the mount by adding this line to /etc/fstab:
   /dev/rbd/rbd/rbd-research /mnt/retinadata ext4 defaults,users 0 2

7. We mounted it:
   > mount /mnt/retinadata

This worked reliably until recently, when our servers lost power unexpectedly. After power was restored, cs03 and cs04 booted automatically, but cs02 did not. cs02 showed a message along the lines of "not able to mount /mnt/retinadata, device not found; press S to skip and continue booting, or press M for manual recovery". We pressed S and let the system finish booting.

We then found that /mnt/retinadata was not mounted and the RBD device at /dev/rbd/rbd/rbd-research was gone. We mapped the image again with:

   > sudo rbd map rbd-research --pool rbd

and after that we were able to mount /mnt/retinadata. But the state we have now is:

1. The whole directory structure is intact.
2. All the files are 0 bytes in size.

Could anybody help with this issue? Thank you very much in advance.

Some more information: we tried to reboot cs02 again and saw a full screen of error messages like:

[44038.215233] libceph: connect 192.168.1.31:6789 socket error on write
[44038.215308] libceph: mon1 192.168.1.31:6789 error -101
libceph: connect 192.168.1.41:6812 error -101
libceph: osd22 192.168.1.41:6812 socket error on write

(Error -101 appears to be ENETUNREACH, i.e. the kernel client could not reach the monitor and OSD addresses at that point.)

Best Regards,
Cyan Cheng
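P.S. For reference, this is how we understand boot-time mapping is supposed to be configured so that the fstab mount does not race the rbd map, which would match the "device not found" message we saw at boot. This is only a sketch based on the rbdmap service shipped with Ceph; the client id and keyring path here are assumptions, and ours may differ:

   # /etc/ceph/rbdmap -- images for the rbdmap service to map at boot
   # format: <pool>/<image>  <map options>
   rbd/rbd-research id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

   # /etc/fstab -- _netdev defers the mount until networking (and rbdmap) is up
   /dev/rbd/rbd/rbd-research /mnt/retinadata ext4 defaults,users,_netdev 0 2

   # enable the service at boot (sysvinit; on systemd hosts: systemctl enable rbdmap)
   > sudo update-rc.d rbdmap defaults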
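P.P.S. If output from status commands would help with diagnosis, we can post any of the following (standard Ceph commands, run on cs02):

   > ceph -s                            # overall health and monitor quorum
   > ceph osd tree                      # which OSDs came back up/in after the power loss
   > rbd showmapped                     # whether rbd-research is currently mapped
   > rbd info rbd-research --pool rbd   # image metadata (existence, size)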