I don't know enough about OCFS2 to help. It sounds like you have non-coherent concurrent writes, though.
Sent from TypeMail
On Oct 15, 2015, at 1:53 AM, gjprabu <gjprabu@xxxxxxxxxxxx> wrote:
Hi Tyler,

Can you please send me the next setup action to be taken on this issue.

Regards,
Prabu

---- On Wed, 14 Oct 2015 13:43:29 +0530 gjprabu <gjprabu@xxxxxxxxxxxx> wrote ----

Hi Tyler,

Thanks for your reply. We have disabled rbd_cache, but the issue still persists. Please find our configuration file:

# cat /etc/ceph/ceph.conf
[global]
fsid = 944fa0af-b7be-45a9-93ff-b9907cfaee3f
mon_initial_members = integ-hm5, integ-hm6, integ-hm7
mon_host = 192.168.112.192,192.168.112.193,192.168.112.194
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
filestore_xattr_use_omap = true
osd_pool_default_size = 2

[mon]
mon_clock_drift_allowed = .500

[client]
rbd_cache = false

--------------------------------------------------------------------------------------

    cluster 944fa0af-b7be-45a9-93ff-b9907cfaee3f
     health HEALTH_OK
     monmap e2: 3 mons at {integ-hm5=192.168.112.192:6789/0,integ-hm6=192.168.112.193:6789/0,integ-hm7=192.168.112.194:6789/0}
            election epoch 480, quorum 0,1,2 integ-hm5,integ-hm6,integ-hm7
     osdmap e49780: 2 osds: 2 up, 2 in
      pgmap v2256565: 190 pgs, 2 pools, 1364 GB data, 410 kobjects
            2559 GB used, 21106 GB / 24921 GB avail
                 190 active+clean
  client io 373 kB/s rd, 13910 B/s wr, 103 op/s

Regards,
Prabu

You need to disable RBD caching.
Tyler Bishop
Chief Technical Officer
513-299-7108 x10
From: "gjprabu" <gjprabu@xxxxxxxxxxxx>
To: "Frédéric Nass" <frederic.nass@xxxxxxxxxxxxxxxx>
Cc: "<ceph-users@xxxxxxxxxxxxxx>" <ceph-users@xxxxxxxxxxxxxx>, "Siva Sokkumuthu" <sivakumar@xxxxxxxxxxxx>, "Kamal Kannan Subramani(kamalakannan)" <kamal@xxxxxxxxxxxxxxxx>
Sent: Tuesday, October 13, 2015 9:11:30 AM
Subject: Re: ceph same rbd on multiple client

Hi,

We have Ceph RBD with OCFS2-mounted servers. When we move a folder on one node, the other nodes see I/O errors on the moved data, as shown below (copying does not cause any problem). As a workaround, remounting the partition resolves the issue, but after some time the problem reoccurs. Please help with this issue.

Note: We have 5 nodes in total. Two nodes work fine; the other nodes show input/output errors on the moved data, like this:

ls -althr
ls: cannot access LITE_3_0_M4_1_TEST: Input/output error
ls: cannot access LITE_3_0_M4_1_OLD: Input/output error
total 0
d????????? ? ? ? ? ? LITE_3_0_M4_1_TEST
d????????? ? ? ? ? ? LITE_3_0_M4_1_OLD

Regards,
Prabu

---- On Fri, 22 May 2015 17:33:04 +0530 Frédéric Nass <frederic.nass@xxxxxxxxxxxxxxxx> wrote ----

Hi,

While waiting for CephFS, you can use a clustered filesystem like OCFS2 or GFS2 on top of RBD mappings, so that each host can access the same device through the clustered filesystem.

Regards,
Frédéric.

On 21/05/2015 16:10, gjprabu wrote:

Hi All,

We are using rbd and map the same rbd image to the rbd device on two different clients, but we can't see the data until we umount and mount -a the partition.

--
Frédéric Nass
Sous direction des Infrastructures, Direction du Numérique, Université de Lorraine.
Tél : 03.83.68.53.83
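Frédéric's suggestion above could be sketched roughly as follows. This is a hedged outline only, assuming a working Ceph cluster and an OCFS2/o2cb cluster already configured on every node; the pool, image name, size, slot count, and mount point are all hypothetical:

```shell
# Sketch: shared RBD image under OCFS2 (assumes o2cb cluster is already up).
# Create one image and map it on every node that needs it:
rbd create --size 102400 rbd/shared-disk   # hypothetical 100 GB image
rbd map rbd/shared-disk                    # run on each node; yields e.g. /dev/rbd0

# Format ONCE, from a single node, with one node slot per mounting host:
mkfs.ocfs2 -N 5 -L shared-disk /dev/rbd0

# Mount on every node; OCFS2's distributed lock manager keeps the
# nodes' views of the filesystem coherent:
mount -t ocfs2 /dev/rbd0 /mnt
```

The key point in the thread is the last step: a non-clustered filesystem such as ext4 or XFS has no lock manager, so mapping the same image on two hosts with one of those gives stale or corrupted data.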
Kindly share the solution for this issue.

Example:
- create rbd image named foo
- map foo to /dev/rbd0 on server A, mount /dev/rbd0 to /mnt
- map foo to /dev/rbd0 on server B, mount /dev/rbd0 to /mnt

Regards,
Prabu

_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
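The example in the last mail is the failure mode the rest of the thread is about. A hedged sketch of why it goes wrong (do not actually run this against data you care about, as it can corrupt the filesystem; server names and the image name `foo` are from the example above):

```shell
# Server A: map the image, format it with a NON-clustered filesystem, write a file.
rbd map foo                  # -> /dev/rbd0
mkfs.ext4 /dev/rbd0
mount /dev/rbd0 /mnt
touch /mnt/hello

# Server B: map and mount the SAME image.
rbd map foo                  # -> /dev/rbd0
mount /dev/rbd0 /mnt
ls /mnt                      # 'hello' may be missing: each kernel caches
                             # blocks and metadata independently, and ext4
                             # has no mechanism to invalidate the other
                             # host's cache. Remounting forces a re-read,
                             # which is why "umount; mount -a" appears to fix it.
```

Disabling rbd_cache does not help here because the staleness comes from the clients' page caches, not from librbd; the fix is a cluster-aware filesystem (OCFS2/GFS2) or CephFS.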