Hi,
We have servers with OCFS2 mounted on top of Ceph RBD. When we move a folder on one node, the other nodes sharing the same disk simultaneously get input/output errors on the replicated data (copying does not cause any problem). As a workaround, remounting the partition resolves the issue, but after some time the problem reoccurs. Please help us with this issue.
Note: We have 5 nodes in total. Two nodes are working fine; the other nodes show input/output errors like the ones below on the moved data.
ls -althr
ls: cannot access LITE_3_0_M4_1_TEST: Input/output error
ls: cannot access LITE_3_0_M4_1_OLD: Input/output error
total 0
d????????? ? ? ? ? ? LITE_3_0_M4_1_TEST
d????????? ? ? ? ? ? LITE_3_0_M4_1_OLD
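
The remount workaround we apply looks like this (a minimal sketch; /mnt and /dev/rbd0 stand in for our actual mount point and mapped RBD device):

# Check the kernel log on an affected node for OCFS2/DLM messages
dmesg | grep -i -E 'ocfs2|o2net|o2dlm'

# Remount the OCFS2 filesystem (clears the stale directory entries
# until the problem reoccurs)
umount /mnt
mount -t ocfs2 /dev/rbd0 /mnt

# List the cluster nodes that currently have the volume mounted
mounted.ocfs2 -f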
Regards
Prabu
---- On Fri, 22 May 2015 17:33:04 +0530 Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx> wrote ----
Hi,

Waiting for CephFS, you can use a clustered filesystem like OCFS2 or GFS2 on top of RBD mappings, so that each host can access the same device and clustered filesystem.

Regards,
Frédéric.

On 21/05/2015 16:10, gjprabu wrote:

Hi All,

We are using RBD and map the same RBD image to the rbd device on two different clients, but I can't see the data until I umount and mount -a the partition. Kindly share the solution for this issue.

Example:
create an rbd image named foo
map foo to /dev/rbd0 on server A, mount /dev/rbd0 to /mnt
map foo to /dev/rbd0 on server B, mount /dev/rbd0 to /mnt

Regards
Prabu

--
Frédéric Nass
Sous direction des Infrastructures,
Direction du Numérique,
Université de Lorraine.
Tél : 03.83.68.53.83
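
For reference, the setup we built from Frédéric's suggestion looks roughly like the sketch below. The pool name, image name, and size are example values, and it assumes the O2CB cluster is already defined in /etc/ocfs2/cluster.conf on every node:

# On one node: create the RBD image
# (pool "rbd", image "foo", size in MB are example values)
rbd create rbd/foo --size 102400

# On every node: map the image (shows up as e.g. /dev/rbd0)
rbd map rbd/foo

# From a single node only: create the clustered filesystem;
# -N sets the maximum number of node slots
mkfs.ocfs2 -N 5 -L shared /dev/rbd0

# On every node: bring the O2CB cluster stack online, then mount
service o2cb online
mount -t ocfs2 /dev/rbd0 /mnt

Without a clustered filesystem (i.e. plain ext4 or xfs on the shared mapping), each node caches its own view of the block device, which is why data written on server A is not visible on server B until a remount.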
_______________________________________________ ceph-users mailing list ceph-users@xxxxxxxxxxxxxx http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com