Re: FW: CephFS concurrency question

On Wed, Apr 22, 2015 at 3:36 PM, Neville <neville.taylor@xxxxxxxxxxxxx> wrote:
> Just realised this never went to the group, sorry folks.
>
> Is it worth me trying the FUSE driver? Is that likely to make a difference
> in this type of scenario? I'm still not sure whether what I'm trying to do
> with CephFS is even supposed to work like this. Ignoring the
> Openstack/libvirt parts for the moment, should I be able to create a file
> and read from it on two different hosts at the same time? Can anyone confirm?
>
> ________________________________
> From: neville.taylor@xxxxxxxxxxxxx
> To: john.spray@xxxxxxxxxx
> Subject: RE:  CephFS concurrency question
> Date: Tue, 21 Apr 2015 16:20:59 +0100
>
> Hi John,
>
> I'm using different pools and users for cinder volumes and CephFS. I've
> created a CephFS user which has rwx on the CephFS pool, then exported the
> key to a secret file and passed it in the mount options in /etc/fstab as
> follows:
>
> X.X.X.X:6789:/       /var/lib/nova/instances ceph
> name=cephfs,secretfile=/etc/ceph/cephfs.secret,noatime         0       2
>
> If I shut down either host, everything works fine, so I'm not sure it's a
> permissions thing. It only seems to go wrong once I try to access a file
> from one host after it has already been accessed on the other.
>
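As an aside: with name=cephfs in the mount options, the kernel client
authenticates as client.cephfs, so the secret file has to hold that
entity's key. A minimal sketch, assuming the entity name matches your
name= option:

ceph auth get-key client.cephfs > /etc/ceph/cephfs.secret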

Please use direct IO to do the test; direct IO bypasses the client page
cache, so the reads and writes go straight to the OSDs. If the file is
created successfully but you get EPERM when writing data to it, it's
almost certainly an authentication issue.

# write one 4k block to the file, bypassing the page cache
dd if=/dev/zero bs=4k count=1 of=file oflag=direct
# read it back with direct IO and dump the contents
dd if=file bs=4k count=1 iflag=direct | od -x
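
If the write does fail with EPERM, check what caps the key actually has.
A sketch, assuming the entity is client.cephfs and the CephFS data pool
is named cephfs_data (adjust both to your setup):

ceph auth get client.cephfs

If the osd caps there don't cover the CephFS data pool, something like
this should extend them:

ceph auth caps client.cephfs mon 'allow r' mds 'allow' osd 'allow rwx pool=cephfs_data'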


> Thanks,
>
> Neville
>
>
> ________________________________
> Date: Tue, 21 Apr 2015 15:31:40 +0100
> From: john.spray@xxxxxxxxxx
> To: neville.taylor@xxxxxxxxxxxxx; ceph-users@xxxxxxxxxxxxxx
>
> Subject: Re:  CephFS concurrency question
>
> On 21/04/15 13:43, Neville wrote:
>
> To test this further I tried the following basic tests:
>
> On Host 2:
>
> root@devops-kvm02:/var/lib/nova/instances# echo hello > test
> root@devops-kvm02:/var/lib/nova/instances# cat test
> hello
> root@devops-kvm02:/var/lib/nova/instances#
>
> Then from Host 1:
>
> root@devops-kvm01:/var/lib/nova/instances# cat test
> cat: test: Operation not permitted
> root@devops-kvm01:/var/lib/nova/instances#
>
> Then back on Host 2:
>
> root@devops-kvm02:/var/lib/nova/instances# cat test
> cat: test: Operation not permitted
> root@devops-kvm02:/var/lib/nova/instances#
>
> Should this even work? My understanding is that CephFS allows concurrent
> access, but I'm not sure if there is some file locking going on that I need
> to understand.
>
>
> You might want to check your OSD authentication keys for the client hosts.
> The results above seem consistent with settings that forbid the clients from
> reading objects from the CephFS data pool (kvm02 can initially read because
> it has its written data in cache).  Perhaps your hosts have keys set up that
> explicitly limit their access to the RBD pools, and don't take account of
> the CephFS data pool.
>
> John
>
>
>
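
To confirm which data pool a file's objects actually land in (and
therefore which pool the osd caps must cover), the layout xattr can
help. A sketch, assuming a kernel client that exposes the ceph virtual
xattrs:

getfattr -n ceph.file.layout /var/lib/nova/instances/test

The pool reported in the layout is the one the client key needs rwx on.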
_______________________________________________
ceph-users mailing list
ceph-users@xxxxxxxxxxxxxx
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



